Test Report: Docker_Linux_crio_arm64 21132

58bc2bd16d03f6a9f0bea0abc55166132e65bd2e:2025-09-07:41313

Test fail (12/325)

TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-055380 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-055380 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f0d00cbf-3816-410c-9854-7084fe7e6c17] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f0d00cbf-3816-410c-9854-7084fe7e6c17] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.006165837s
addons_test.go:694: (dbg) Run:  kubectl --context addons-055380 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:694: (dbg) Non-zero exit: kubectl --context addons-055380 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": exit status 1 (160.962357ms)

** stderr ** 
	command terminated with exit code 1

** /stderr **
addons_test.go:696: printenv creds: exit status 1
--- FAIL: TestAddons/serial/GCPAuth/FakeCredentials (11.54s)
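
The exit status 1 from printenv means GOOGLE_APPLICATION_CREDENTIALS was never set in the busybox container, i.e. the variable the gcp-auth addon's admission webhook is expected to inject is missing. A minimal sketch of a manual check, assuming the same addons-055380 context, the default namespace, and that the addon runs in the gcp-auth namespace (these commands were not part of the run):

	kubectl --context addons-055380 get mutatingwebhookconfigurations
	kubectl --context addons-055380 -n gcp-auth get pods
	kubectl --context addons-055380 get pod busybox -o jsonpath='{.spec.containers[0].env}'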

TestAddons/parallel/Ingress (155.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-055380 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-055380 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-055380 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2ea00784-9b9b-4068-90a2-4278091d56e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2ea00784-9b9b-4068-90a2-4278091d56e8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003466016s
I0907 00:14:48.769355  296249 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-055380 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.402222082s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
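
Exit status 28 corresponds to curl's operation-timeout error (CURLE_OPERATION_TIMEDOUT), so the request through the ingress never completed within the 2m11s window. A hedged debugging sketch against the same profile (these commands were not part of the run):

	kubectl --context addons-055380 -n ingress-nginx get pods -o wide
	kubectl --context addons-055380 describe ingress
	out/minikube-linux-arm64 -p addons-055380 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"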
addons_test.go:288: (dbg) Run:  kubectl --context addons-055380 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-055380
helpers_test.go:243: (dbg) docker inspect addons-055380:

-- stdout --
	[
	    {
	        "Id": "92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2",
	        "Created": "2025-09-07T00:10:24.609616993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297409,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-07T00:10:24.670344468Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2/hostname",
	        "HostsPath": "/var/lib/docker/containers/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2/hosts",
	        "LogPath": "/var/lib/docker/containers/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2-json.log",
	        "Name": "/addons-055380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-055380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-055380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2",
	                "LowerDir": "/var/lib/docker/overlay2/8dc6960eeade0851c22ae7311a15850776d6117a1ed39ea71cba84885defd63b-init/diff:/var/lib/docker/overlay2/5a4b8b8cbe09f4c7d8197d949f1b03b5a8d427ad9c5a27d0359fd04ab981afab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8dc6960eeade0851c22ae7311a15850776d6117a1ed39ea71cba84885defd63b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8dc6960eeade0851c22ae7311a15850776d6117a1ed39ea71cba84885defd63b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8dc6960eeade0851c22ae7311a15850776d6117a1ed39ea71cba84885defd63b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-055380",
	                "Source": "/var/lib/docker/volumes/addons-055380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-055380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-055380",
	                "name.minikube.sigs.k8s.io": "addons-055380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3addf1475e98ded3379d06f1106e6bd6949e1f425348b42b806a05cab5038846",
	            "SandboxKey": "/var/run/docker/netns/3addf1475e98",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-055380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:4c:65:7e:d0:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f91f5c0a2b0a368a7397a3e6126db7503f4f78594418bf8b51ab643d64948a4",
	                    "EndpointID": "b8b0ba95eab9de7d4c154e0c1cc20136778aa18ab093b8a76f3435b8fe29cb46",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-055380",
	                        "92432128cb4e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
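
The inspect output shows the node's 22/tcp published on 127.0.0.1:33138, which is the host endpoint the libmachine SSH client dials later in this log. A sketch of pulling that mapping directly, using the same Go template the harness itself runs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-055380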
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-055380 -n addons-055380
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 logs -n 25: (1.767923115s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-005112                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-005112 │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │ 07 Sep 25 00:09 UTC │
	│ start   │ --download-only -p binary-mirror-905152 --alsologtostderr --binary-mirror http://127.0.0.1:44549 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-905152   │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │                     │
	│ delete  │ -p binary-mirror-905152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-905152   │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │ 07 Sep 25 00:09 UTC │
	│ addons  │ enable dashboard -p addons-055380                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │                     │
	│ addons  │ disable dashboard -p addons-055380                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │                     │
	│ start   │ -p addons-055380 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │ 07 Sep 25 00:12 UTC │
	│ addons  │ addons-055380 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:12 UTC │ 07 Sep 25 00:12 UTC │
	│ addons  │ addons-055380 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ enable headlamp -p addons-055380 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ addons-055380 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ ip      │ addons-055380 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ addons-055380 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ addons-055380 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ addons-055380 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ ssh     │ addons-055380 ssh cat /opt/local-path-provisioner/pvc-fd80b47b-65ab-4f10-9a0a-f519bd7a8560_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ addons-055380 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:14 UTC │
	│ addons  │ addons-055380 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:13 UTC │ 07 Sep 25 00:13 UTC │
	│ addons  │ addons-055380 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │ 07 Sep 25 00:14 UTC │
	│ addons  │ addons-055380 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │ 07 Sep 25 00:14 UTC │
	│ addons  │ addons-055380 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │ 07 Sep 25 00:14 UTC │
	│ ssh     │ addons-055380 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │                     │
	│ addons  │ addons-055380 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │ 07 Sep 25 00:14 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-055380                                                                                                                                                                                                                                                                                                                                                                                           │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │ 07 Sep 25 00:14 UTC │
	│ addons  │ addons-055380 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:14 UTC │ 07 Sep 25 00:14 UTC │
	│ ip      │ addons-055380 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-055380          │ jenkins │ v1.36.0 │ 07 Sep 25 00:17 UTC │ 07 Sep 25 00:17 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:09:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:09:59.798843  297008 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:09:59.799009  297008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:59.799038  297008 out.go:374] Setting ErrFile to fd 2...
	I0907 00:09:59.799058  297008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:59.799325  297008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:09:59.799805  297008 out.go:368] Setting JSON to false
	I0907 00:09:59.800685  297008 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6749,"bootTime":1757197051,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:09:59.800785  297008 start.go:140] virtualization:  
	I0907 00:09:59.804187  297008 out.go:179] * [addons-055380] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 00:09:59.807944  297008 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:09:59.808083  297008 notify.go:220] Checking for updates...
	I0907 00:09:59.813679  297008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:09:59.816547  297008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:09:59.819511  297008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:09:59.822454  297008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 00:09:59.825345  297008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:09:59.828508  297008 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:09:59.859142  297008 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:09:59.859267  297008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:09:59.919075  297008 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-07 00:09:59.910235974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:09:59.919190  297008 docker.go:318] overlay module found
	I0907 00:09:59.922305  297008 out.go:179] * Using the docker driver based on user configuration
	I0907 00:09:59.925181  297008 start.go:304] selected driver: docker
	I0907 00:09:59.925206  297008 start.go:918] validating driver "docker" against <nil>
	I0907 00:09:59.925221  297008 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:09:59.925948  297008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:09:59.978792  297008 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-07 00:09:59.969968313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:09:59.978946  297008 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0907 00:09:59.979184  297008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:09:59.982080  297008 out.go:179] * Using Docker driver with root privileges
	I0907 00:09:59.984953  297008 cni.go:84] Creating CNI manager for ""
	I0907 00:09:59.985025  297008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:09:59.985041  297008 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0907 00:09:59.985120  297008 start.go:348] cluster config:
	{Name:addons-055380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-055380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0907 00:09:59.988111  297008 out.go:179] * Starting "addons-055380" primary control-plane node in "addons-055380" cluster
	I0907 00:09:59.990907  297008 cache.go:123] Beginning downloading kic base image for docker with crio
	I0907 00:09:59.993805  297008 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0907 00:09:59.996727  297008 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:09:59.996795  297008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0907 00:09:59.996840  297008 cache.go:58] Caching tarball of preloaded images
	I0907 00:09:59.996935  297008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0907 00:09:59.996941  297008 preload.go:172] Found /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0907 00:09:59.996960  297008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0907 00:09:59.997286  297008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/config.json ...
	I0907 00:09:59.997316  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/config.json: {Name:mk12add556b882c2fb80c4be5f157a08e22c9d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:00.046040  297008 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0907 00:10:00.046195  297008 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0907 00:10:00.046222  297008 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0907 00:10:00.046229  297008 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0907 00:10:00.046238  297008 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0907 00:10:00.046244  297008 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0907 00:10:17.859173  297008 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0907 00:10:17.859225  297008 cache.go:232] Successfully downloaded all kic artifacts
	I0907 00:10:17.859266  297008 start.go:360] acquireMachinesLock for addons-055380: {Name:mk44ab34d24c0a9f8ff6d342ab38a25e4612f45c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:10:17.860008  297008 start.go:364] duration metric: took 714.351µs to acquireMachinesLock for "addons-055380"
	I0907 00:10:17.860044  297008 start.go:93] Provisioning new machine with config: &{Name:addons-055380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-055380 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:10:17.860116  297008 start.go:125] createHost starting for "" (driver="docker")
	I0907 00:10:17.863630  297008 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0907 00:10:17.863885  297008 start.go:159] libmachine.API.Create for "addons-055380" (driver="docker")
	I0907 00:10:17.863930  297008 client.go:168] LocalClient.Create starting
	I0907 00:10:17.864067  297008 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem
	I0907 00:10:18.115380  297008 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem
	I0907 00:10:18.176999  297008 cli_runner.go:164] Run: docker network inspect addons-055380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0907 00:10:18.192869  297008 cli_runner.go:211] docker network inspect addons-055380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0907 00:10:18.192963  297008 network_create.go:284] running [docker network inspect addons-055380] to gather additional debugging logs...
	I0907 00:10:18.192985  297008 cli_runner.go:164] Run: docker network inspect addons-055380
	W0907 00:10:18.208466  297008 cli_runner.go:211] docker network inspect addons-055380 returned with exit code 1
	I0907 00:10:18.208496  297008 network_create.go:287] error running [docker network inspect addons-055380]: docker network inspect addons-055380: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-055380 not found
	I0907 00:10:18.208509  297008 network_create.go:289] output of [docker network inspect addons-055380]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-055380 not found
	
	** /stderr **
	I0907 00:10:18.208647  297008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0907 00:10:18.225832  297008 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a84b20}
	I0907 00:10:18.225869  297008 network_create.go:124] attempt to create docker network addons-055380 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0907 00:10:18.225925  297008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-055380 addons-055380
	I0907 00:10:18.283998  297008 network_create.go:108] docker network addons-055380 192.168.49.0/24 created
	I0907 00:10:18.284035  297008 kic.go:121] calculated static IP "192.168.49.2" for the "addons-055380" container
	I0907 00:10:18.284137  297008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0907 00:10:18.299758  297008 cli_runner.go:164] Run: docker volume create addons-055380 --label name.minikube.sigs.k8s.io=addons-055380 --label created_by.minikube.sigs.k8s.io=true
	I0907 00:10:18.318657  297008 oci.go:103] Successfully created a docker volume addons-055380
	I0907 00:10:18.318755  297008 cli_runner.go:164] Run: docker run --rm --name addons-055380-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-055380 --entrypoint /usr/bin/test -v addons-055380:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0907 00:10:20.364216  297008 cli_runner.go:217] Completed: docker run --rm --name addons-055380-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-055380 --entrypoint /usr/bin/test -v addons-055380:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (2.045418125s)
	I0907 00:10:20.364253  297008 oci.go:107] Successfully prepared a docker volume addons-055380
	I0907 00:10:20.364283  297008 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:10:20.364306  297008 kic.go:194] Starting extracting preloaded images to volume ...
	I0907 00:10:20.364396  297008 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-055380:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0907 00:10:24.537948  297008 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-055380:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173503129s)
	I0907 00:10:24.537980  297008 kic.go:203] duration metric: took 4.173669924s to extract preloaded images to volume ...
	W0907 00:10:24.538119  297008 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0907 00:10:24.538233  297008 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0907 00:10:24.594643  297008 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-055380 --name addons-055380 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-055380 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-055380 --network addons-055380 --ip 192.168.49.2 --volume addons-055380:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0907 00:10:24.899461  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Running}}
	I0907 00:10:24.923351  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:24.952525  297008 cli_runner.go:164] Run: docker exec addons-055380 stat /var/lib/dpkg/alternatives/iptables
	I0907 00:10:25.024423  297008 oci.go:144] the created container "addons-055380" has a running status.
	I0907 00:10:25.024457  297008 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa...
	I0907 00:10:25.570960  297008 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0907 00:10:25.600237  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:25.624982  297008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0907 00:10:25.625004  297008 kic_runner.go:114] Args: [docker exec --privileged addons-055380 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0907 00:10:25.682089  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:25.704660  297008 machine.go:93] provisionDockerMachine start ...
	I0907 00:10:25.704764  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:25.734471  297008 main.go:141] libmachine: Using SSH client type: native
	I0907 00:10:25.734794  297008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0907 00:10:25.734804  297008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0907 00:10:25.900032  297008 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-055380
	
	I0907 00:10:25.900098  297008 ubuntu.go:182] provisioning hostname "addons-055380"
	I0907 00:10:25.900207  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:25.920394  297008 main.go:141] libmachine: Using SSH client type: native
	I0907 00:10:25.920697  297008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0907 00:10:25.920709  297008 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-055380 && echo "addons-055380" | sudo tee /etc/hostname
	I0907 00:10:26.060494  297008 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-055380
	
	I0907 00:10:26.060576  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:26.086541  297008 main.go:141] libmachine: Using SSH client type: native
	I0907 00:10:26.086881  297008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0907 00:10:26.086898  297008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-055380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-055380/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-055380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:10:26.221097  297008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:10:26.221122  297008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21132-294391/.minikube CaCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21132-294391/.minikube}
	I0907 00:10:26.221157  297008 ubuntu.go:190] setting up certificates
	I0907 00:10:26.221166  297008 provision.go:84] configureAuth start
	I0907 00:10:26.221226  297008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-055380
	I0907 00:10:26.240945  297008 provision.go:143] copyHostCerts
	I0907 00:10:26.241032  297008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem (1082 bytes)
	I0907 00:10:26.241149  297008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem (1123 bytes)
	I0907 00:10:26.241202  297008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem (1675 bytes)
	I0907 00:10:26.241250  297008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem org=jenkins.addons-055380 san=[127.0.0.1 192.168.49.2 addons-055380 localhost minikube]
	I0907 00:10:26.366831  297008 provision.go:177] copyRemoteCerts
	I0907 00:10:26.366896  297008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:10:26.366959  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:26.384036  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:26.473790  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:10:26.498774  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0907 00:10:26.522384  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:10:26.549430  297008 provision.go:87] duration metric: took 328.239789ms to configureAuth
	I0907 00:10:26.549456  297008 ubuntu.go:206] setting minikube options for container-runtime
	I0907 00:10:26.549646  297008 config.go:182] Loaded profile config "addons-055380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:10:26.549746  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:26.566958  297008 main.go:141] libmachine: Using SSH client type: native
	I0907 00:10:26.567269  297008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0907 00:10:26.567285  297008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:10:26.796623  297008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:10:26.796644  297008 machine.go:96] duration metric: took 1.091965536s to provisionDockerMachine
	I0907 00:10:26.796655  297008 client.go:171] duration metric: took 8.932719114s to LocalClient.Create
	I0907 00:10:26.796682  297008 start.go:167] duration metric: took 8.932798713s to libmachine.API.Create "addons-055380"
	I0907 00:10:26.796689  297008 start.go:293] postStartSetup for "addons-055380" (driver="docker")
	I0907 00:10:26.796699  297008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:10:26.796760  297008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:10:26.796800  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:26.815005  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:26.906111  297008 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:10:26.909517  297008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0907 00:10:26.909554  297008 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0907 00:10:26.909565  297008 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0907 00:10:26.909572  297008 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0907 00:10:26.909583  297008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-294391/.minikube/addons for local assets ...
	I0907 00:10:26.909653  297008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-294391/.minikube/files for local assets ...
	I0907 00:10:26.909681  297008 start.go:296] duration metric: took 112.985317ms for postStartSetup
	I0907 00:10:26.910017  297008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-055380
	I0907 00:10:26.927393  297008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/config.json ...
	I0907 00:10:26.927688  297008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:10:26.927741  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:26.945274  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:27.034143  297008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0907 00:10:27.038940  297008 start.go:128] duration metric: took 9.178800026s to createHost
	I0907 00:10:27.038966  297008 start.go:83] releasing machines lock for "addons-055380", held for 9.178941533s
	I0907 00:10:27.039044  297008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-055380
	I0907 00:10:27.055780  297008 ssh_runner.go:195] Run: cat /version.json
	I0907 00:10:27.055840  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:27.056163  297008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:10:27.056245  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:27.075374  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:27.076292  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:27.160350  297008 ssh_runner.go:195] Run: systemctl --version
	I0907 00:10:27.292136  297008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:10:27.432597  297008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:10:27.437145  297008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:10:27.460093  297008 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0907 00:10:27.460169  297008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:10:27.497690  297008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0907 00:10:27.497716  297008 start.go:495] detecting cgroup driver to use...
	I0907 00:10:27.497750  297008 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0907 00:10:27.497801  297008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:10:27.514430  297008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:10:27.525424  297008 docker.go:218] disabling cri-docker service (if available) ...
	I0907 00:10:27.525533  297008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:10:27.539666  297008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:10:27.554905  297008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:10:27.647690  297008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:10:27.742970  297008 docker.go:234] disabling docker service ...
	I0907 00:10:27.743049  297008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:10:27.763274  297008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:10:27.774619  297008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:10:27.865016  297008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:10:27.959043  297008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:10:27.971221  297008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:10:27.988217  297008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0907 00:10:27.988311  297008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:27.998186  297008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:10:27.998296  297008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:28.018287  297008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:28.029369  297008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:28.040401  297008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:10:28.050234  297008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:28.061322  297008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:28.078036  297008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:10:28.088686  297008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:10:28.097791  297008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:10:28.107083  297008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:10:28.196841  297008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:10:28.312042  297008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:10:28.312187  297008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:10:28.316234  297008 start.go:563] Will wait 60s for crictl version
	I0907 00:10:28.316335  297008 ssh_runner.go:195] Run: which crictl
	I0907 00:10:28.319997  297008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:10:28.363263  297008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0907 00:10:28.363354  297008 ssh_runner.go:195] Run: crio --version
	I0907 00:10:28.401088  297008 ssh_runner.go:195] Run: crio --version
	I0907 00:10:28.444108  297008 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0907 00:10:28.446912  297008 cli_runner.go:164] Run: docker network inspect addons-055380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0907 00:10:28.463166  297008 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0907 00:10:28.466796  297008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:10:28.477663  297008 kubeadm.go:875] updating cluster {Name:addons-055380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-055380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 00:10:28.477773  297008 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:10:28.477835  297008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:10:28.560564  297008 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:10:28.560600  297008 crio.go:433] Images already preloaded, skipping extraction
	I0907 00:10:28.560658  297008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:10:28.600942  297008 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:10:28.600967  297008 cache_images.go:85] Images are preloaded, skipping loading
	I0907 00:10:28.600981  297008 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0907 00:10:28.601078  297008 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-055380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-055380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0907 00:10:28.601164  297008 ssh_runner.go:195] Run: crio config
	I0907 00:10:28.649728  297008 cni.go:84] Creating CNI manager for ""
	I0907 00:10:28.649753  297008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:10:28.649763  297008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 00:10:28.649785  297008 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-055380 NodeName:addons-055380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:10:28.649918  297008 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-055380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:10:28.649994  297008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0907 00:10:28.659197  297008 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:10:28.659295  297008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:10:28.668351  297008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0907 00:10:28.688892  297008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:10:28.708010  297008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0907 00:10:28.727251  297008 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0907 00:10:28.730758  297008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:10:28.741910  297008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:10:28.834942  297008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:10:28.849124  297008 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380 for IP: 192.168.49.2
	I0907 00:10:28.849147  297008 certs.go:194] generating shared ca certs ...
	I0907 00:10:28.849166  297008 certs.go:226] acquiring lock for ca certs: {Name:mkf2f86d550791cd126f7b3aeff6c351ed5c0816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:28.849957  297008 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key
	I0907 00:10:30.333741  297008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt ...
	I0907 00:10:30.333779  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt: {Name:mkdd82430b8dda7c7a65c55e62283d80ef79b3da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:30.333978  297008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key ...
	I0907 00:10:30.333991  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key: {Name:mk99f89bc748b0f9149988ef60e26aaa3d23864b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:30.334836  297008 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key
	I0907 00:10:31.726042  297008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.crt ...
	I0907 00:10:31.726078  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.crt: {Name:mk2aa6f9b7f9b20197c61d7e4d8cd57017ce6865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:31.726276  297008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key ...
	I0907 00:10:31.726291  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key: {Name:mk62b78380fa071682007c3324fd6a41b1d83b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:31.726377  297008 certs.go:256] generating profile certs ...
	I0907 00:10:31.726443  297008 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.key
	I0907 00:10:31.726465  297008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt with IP's: []
	I0907 00:10:32.735570  297008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt ...
	I0907 00:10:32.735602  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: {Name:mka9bed3e0b4667a0e37f9b0cb8f646a531aca47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:32.736499  297008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.key ...
	I0907 00:10:32.736521  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.key: {Name:mk757c6786efc9a42c08e2a4b2ff8a1e895ff611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:32.737259  297008 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.key.54529def
	I0907 00:10:32.737290  297008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.crt.54529def with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0907 00:10:33.223658  297008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.crt.54529def ...
	I0907 00:10:33.223690  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.crt.54529def: {Name:mk09498c88aaec8adf4bfdfb584c0d59fd07c4f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:33.224471  297008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.key.54529def ...
	I0907 00:10:33.224490  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.key.54529def: {Name:mk704474de9dbc2849c62deb532fb23c23f79af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:33.225123  297008 certs.go:381] copying /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.crt.54529def -> /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.crt
	I0907 00:10:33.225218  297008 certs.go:385] copying /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.key.54529def -> /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.key
	I0907 00:10:33.225274  297008 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.key
	I0907 00:10:33.225297  297008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.crt with IP's: []
	I0907 00:10:33.783016  297008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.crt ...
	I0907 00:10:33.783049  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.crt: {Name:mk50d3693d135e449e59240606846004ac590548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:33.783822  297008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.key ...
	I0907 00:10:33.783841  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.key: {Name:mk7f63deee3abc27de0e706592af43f91d48adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:33.784653  297008 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:10:33.784703  297008 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:10:33.784734  297008 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:10:33.784766  297008 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem (1675 bytes)
	I0907 00:10:33.785406  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:10:33.809621  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0907 00:10:33.833429  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:10:33.858324  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:10:33.883192  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0907 00:10:33.908064  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:10:33.932520  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:10:33.957372  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 00:10:33.982413  297008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:10:34.008398  297008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:10:34.028998  297008 ssh_runner.go:195] Run: openssl version
	I0907 00:10:34.034881  297008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:10:34.044758  297008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:10:34.048446  297008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  7 00:10 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:10:34.048522  297008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:10:34.055895  297008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:10:34.065724  297008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 00:10:34.069257  297008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0907 00:10:34.069315  297008 kubeadm.go:392] StartCluster: {Name:addons-055380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-055380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:10:34.069403  297008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:10:34.069468  297008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:10:34.110114  297008 cri.go:89] found id: ""
	I0907 00:10:34.110207  297008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:10:34.119308  297008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 00:10:34.128298  297008 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0907 00:10:34.128411  297008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:10:34.137758  297008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:10:34.137777  297008 kubeadm.go:157] found existing configuration files:
	
	I0907 00:10:34.137860  297008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0907 00:10:34.146849  297008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0907 00:10:34.146943  297008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0907 00:10:34.155445  297008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0907 00:10:34.164260  297008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0907 00:10:34.164353  297008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0907 00:10:34.173431  297008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0907 00:10:34.183015  297008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0907 00:10:34.183086  297008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0907 00:10:34.194319  297008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0907 00:10:34.204296  297008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0907 00:10:34.204378  297008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0907 00:10:34.213621  297008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0907 00:10:34.257441  297008 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0907 00:10:34.257616  297008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0907 00:10:34.274721  297008 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0907 00:10:34.274813  297008 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0907 00:10:34.274854  297008 kubeadm.go:310] OS: Linux
	I0907 00:10:34.274915  297008 kubeadm.go:310] CGROUPS_CPU: enabled
	I0907 00:10:34.274980  297008 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0907 00:10:34.275045  297008 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0907 00:10:34.275109  297008 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0907 00:10:34.275172  297008 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0907 00:10:34.275259  297008 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0907 00:10:34.275321  297008 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0907 00:10:34.275380  297008 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0907 00:10:34.275460  297008 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0907 00:10:34.342084  297008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 00:10:34.342203  297008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 00:10:34.342312  297008 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0907 00:10:34.349366  297008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 00:10:34.355803  297008 out.go:252]   - Generating certificates and keys ...
	I0907 00:10:34.355903  297008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0907 00:10:34.355977  297008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0907 00:10:34.692418  297008 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 00:10:35.412166  297008 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0907 00:10:35.675107  297008 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0907 00:10:35.940854  297008 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0907 00:10:36.067796  297008 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0907 00:10:36.068157  297008 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-055380 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0907 00:10:37.237852  297008 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0907 00:10:37.238229  297008 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-055380 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0907 00:10:37.676244  297008 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 00:10:37.827942  297008 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 00:10:37.899688  297008 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0907 00:10:37.899977  297008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 00:10:38.885368  297008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 00:10:39.021586  297008 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0907 00:10:39.758847  297008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 00:10:40.170051  297008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 00:10:40.650809  297008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 00:10:40.651633  297008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 00:10:40.655543  297008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 00:10:40.658986  297008 out.go:252]   - Booting up control plane ...
	I0907 00:10:40.659095  297008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 00:10:40.659171  297008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 00:10:40.659568  297008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 00:10:40.670056  297008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 00:10:40.670427  297008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0907 00:10:40.676914  297008 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0907 00:10:40.677443  297008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 00:10:40.677685  297008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0907 00:10:40.770621  297008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0907 00:10:40.770746  297008 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0907 00:10:42.289199  297008 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.51567676s
	I0907 00:10:42.290243  297008 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0907 00:10:42.290660  297008 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0907 00:10:42.291503  297008 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0907 00:10:42.291822  297008 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0907 00:10:45.748387  297008 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.4563551s
	I0907 00:10:47.495004  297008 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.202785024s
	I0907 00:10:48.793293  297008 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.502171444s
	I0907 00:10:48.815149  297008 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 00:10:48.829362  297008 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 00:10:48.843498  297008 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 00:10:48.843718  297008 kubeadm.go:310] [mark-control-plane] Marking the node addons-055380 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 00:10:48.856858  297008 kubeadm.go:310] [bootstrap-token] Using token: agjo8e.qzz81us1ddsoy6vd
	I0907 00:10:48.859980  297008 out.go:252]   - Configuring RBAC rules ...
	I0907 00:10:48.860117  297008 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 00:10:48.864188  297008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 00:10:48.871494  297008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 00:10:48.877767  297008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 00:10:48.882058  297008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 00:10:48.886123  297008 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 00:10:49.200947  297008 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 00:10:49.647474  297008 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0907 00:10:50.201398  297008 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0907 00:10:50.202641  297008 kubeadm.go:310] 
	I0907 00:10:50.202711  297008 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0907 00:10:50.202717  297008 kubeadm.go:310] 
	I0907 00:10:50.202790  297008 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0907 00:10:50.202795  297008 kubeadm.go:310] 
	I0907 00:10:50.202819  297008 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0907 00:10:50.202876  297008 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 00:10:50.202924  297008 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 00:10:50.202928  297008 kubeadm.go:310] 
	I0907 00:10:50.202979  297008 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0907 00:10:50.202984  297008 kubeadm.go:310] 
	I0907 00:10:50.203029  297008 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 00:10:50.203034  297008 kubeadm.go:310] 
	I0907 00:10:50.203083  297008 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0907 00:10:50.203155  297008 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 00:10:50.203221  297008 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 00:10:50.203225  297008 kubeadm.go:310] 
	I0907 00:10:50.203306  297008 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 00:10:50.203379  297008 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0907 00:10:50.203384  297008 kubeadm.go:310] 
	I0907 00:10:50.203464  297008 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token agjo8e.qzz81us1ddsoy6vd \
	I0907 00:10:50.203562  297008 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f09d0d7a03ad8280e5c5379592d08528a80ed324cc8775b613706c99ea8527e8 \
	I0907 00:10:50.203582  297008 kubeadm.go:310] 	--control-plane 
	I0907 00:10:50.203587  297008 kubeadm.go:310] 
	I0907 00:10:50.203667  297008 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0907 00:10:50.203672  297008 kubeadm.go:310] 
	I0907 00:10:50.203750  297008 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token agjo8e.qzz81us1ddsoy6vd \
	I0907 00:10:50.203865  297008 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f09d0d7a03ad8280e5c5379592d08528a80ed324cc8775b613706c99ea8527e8 
	I0907 00:10:50.206321  297008 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0907 00:10:50.206560  297008 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0907 00:10:50.206671  297008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 00:10:50.206693  297008 cni.go:84] Creating CNI manager for ""
	I0907 00:10:50.206709  297008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:10:50.209914  297008 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0907 00:10:50.212865  297008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0907 00:10:50.216417  297008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0907 00:10:50.216434  297008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0907 00:10:50.236330  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 00:10:50.497819  297008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:10:50.497954  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:50.498029  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-055380 minikube.k8s.io/updated_at=2025_09_07T00_10_50_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d minikube.k8s.io/name=addons-055380 minikube.k8s.io/primary=true
	I0907 00:10:50.670941  297008 ops.go:34] apiserver oom_adj: -16
	I0907 00:10:50.671051  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:51.171694  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:51.672111  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:52.171976  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:52.671966  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:53.172057  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:53.671895  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:54.171860  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:54.671130  297008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 00:10:54.769833  297008 kubeadm.go:1105] duration metric: took 4.271925325s to wait for elevateKubeSystemPrivileges
	I0907 00:10:54.769867  297008 kubeadm.go:394] duration metric: took 20.700556377s to StartCluster
	I0907 00:10:54.769886  297008 settings.go:142] acquiring lock: {Name:mkd4385cdffa24b1b1c95580709bac830a122e89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:54.769995  297008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:10:54.770362  297008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/kubeconfig: {Name:mkff4b98bbe95c3fd7ed7c7c76191ddc1012e81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:10:54.770587  297008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:10:54.770718  297008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 00:10:54.770957  297008 config.go:182] Loaded profile config "addons-055380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:10:54.770988  297008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0907 00:10:54.771065  297008 addons.go:69] Setting yakd=true in profile "addons-055380"
	I0907 00:10:54.771083  297008 addons.go:238] Setting addon yakd=true in "addons-055380"
	I0907 00:10:54.771105  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.771594  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.772143  297008 addons.go:69] Setting inspektor-gadget=true in profile "addons-055380"
	I0907 00:10:54.772170  297008 addons.go:238] Setting addon inspektor-gadget=true in "addons-055380"
	I0907 00:10:54.772198  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.772635  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.772783  297008 addons.go:69] Setting metrics-server=true in profile "addons-055380"
	I0907 00:10:54.772797  297008 addons.go:238] Setting addon metrics-server=true in "addons-055380"
	I0907 00:10:54.772829  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.773221  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.773592  297008 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-055380"
	I0907 00:10:54.773621  297008 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-055380"
	I0907 00:10:54.773655  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.774070  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.776858  297008 addons.go:69] Setting cloud-spanner=true in profile "addons-055380"
	I0907 00:10:54.776892  297008 addons.go:238] Setting addon cloud-spanner=true in "addons-055380"
	I0907 00:10:54.776934  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.777395  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.781741  297008 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-055380"
	I0907 00:10:54.781780  297008 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-055380"
	I0907 00:10:54.781823  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.782285  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.782529  297008 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-055380"
	I0907 00:10:54.782581  297008 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-055380"
	I0907 00:10:54.782606  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.783014  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.792536  297008 addons.go:69] Setting registry=true in profile "addons-055380"
	I0907 00:10:54.792596  297008 addons.go:238] Setting addon registry=true in "addons-055380"
	I0907 00:10:54.792638  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.793360  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.807617  297008 addons.go:69] Setting default-storageclass=true in profile "addons-055380"
	I0907 00:10:54.807677  297008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-055380"
	I0907 00:10:54.808162  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.808198  297008 addons.go:69] Setting gcp-auth=true in profile "addons-055380"
	I0907 00:10:54.881468  297008 mustload.go:65] Loading cluster: addons-055380
	I0907 00:10:54.881681  297008 config.go:182] Loaded profile config "addons-055380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:10:54.881933  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.808212  297008 addons.go:69] Setting ingress=true in profile "addons-055380"
	I0907 00:10:54.911209  297008 addons.go:238] Setting addon ingress=true in "addons-055380"
	I0907 00:10:54.911260  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.911755  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.914047  297008 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0907 00:10:54.917836  297008 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0907 00:10:54.917864  297008 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0907 00:10:54.917938  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:54.808217  297008 addons.go:69] Setting ingress-dns=true in profile "addons-055380"
	I0907 00:10:54.918344  297008 addons.go:238] Setting addon ingress-dns=true in "addons-055380"
	I0907 00:10:54.918395  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.918875  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.933830  297008 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0907 00:10:54.934220  297008 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0907 00:10:54.808403  297008 out.go:179] * Verifying Kubernetes components...
	I0907 00:10:54.942365  297008 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0907 00:10:54.946960  297008 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0907 00:10:54.947121  297008 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0907 00:10:54.947621  297008 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0907 00:10:54.947681  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:54.816972  297008 addons.go:69] Setting storage-provisioner=true in profile "addons-055380"
	I0907 00:10:54.948487  297008 addons.go:238] Setting addon storage-provisioner=true in "addons-055380"
	I0907 00:10:54.948536  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.949142  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.967026  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0907 00:10:54.816978  297008 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-055380"
	I0907 00:10:54.971536  297008 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-055380"
	I0907 00:10:54.972504  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:54.974775  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0907 00:10:54.978900  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0907 00:10:54.983013  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0907 00:10:54.983749  297008 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0907 00:10:54.983768  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0907 00:10:54.983851  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:54.848905  297008 addons.go:69] Setting volcano=true in profile "addons-055380"
	I0907 00:10:54.985829  297008 addons.go:238] Setting addon volcano=true in "addons-055380"
	I0907 00:10:54.848933  297008 addons.go:69] Setting volumesnapshots=true in profile "addons-055380"
	I0907 00:10:54.985873  297008 addons.go:238] Setting addon volumesnapshots=true in "addons-055380"
	I0907 00:10:54.985900  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:54.947176  297008 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0907 00:10:54.985954  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0907 00:10:54.986017  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.009662  297008 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0907 00:10:55.009690  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0907 00:10:55.009769  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:54.947183  297008 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0907 00:10:55.016250  297008 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0907 00:10:55.016283  297008 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0907 00:10:55.016393  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:54.947236  297008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:10:54.816943  297008 addons.go:69] Setting registry-creds=true in profile "addons-055380"
	I0907 00:10:55.038251  297008 addons.go:238] Setting addon registry-creds=true in "addons-055380"
	I0907 00:10:55.038317  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:55.038825  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:55.057109  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:55.057687  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:55.067387  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:55.098775  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0907 00:10:55.102278  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0907 00:10:55.103070  297008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 00:10:55.108573  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0907 00:10:55.110383  297008 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0907 00:10:55.114402  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0907 00:10:55.117425  297008 addons.go:238] Setting addon default-storageclass=true in "addons-055380"
	I0907 00:10:55.117479  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:55.118049  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:55.150700  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:55.154911  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0907 00:10:55.154936  297008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0907 00:10:55.155013  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.182728  297008 out.go:179]   - Using image docker.io/registry:3.0.0
	I0907 00:10:55.186900  297008 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0907 00:10:55.186926  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0907 00:10:55.187008  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.211812  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.212289  297008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0907 00:10:55.215778  297008 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0907 00:10:55.240184  297008 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0907 00:10:55.240245  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (3051 bytes)
	I0907 00:10:55.240353  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.242825  297008 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-055380"
	I0907 00:10:55.242901  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:10:55.243488  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:10:55.274024  297008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0907 00:10:55.277034  297008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0907 00:10:55.280470  297008 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0907 00:10:55.280538  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0907 00:10:55.280648  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.301478  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.304474  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.305294  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	W0907 00:10:55.320588  297008 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0907 00:10:55.329233  297008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:10:55.338816  297008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:10:55.338842  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:10:55.338917  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.359666  297008 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0907 00:10:55.363705  297008 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0907 00:10:55.363733  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0907 00:10:55.363820  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.382004  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.387412  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.394745  297008 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0907 00:10:55.397633  297008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0907 00:10:55.397659  297008 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0907 00:10:55.397725  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.402726  297008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:10:55.402747  297008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:10:55.402811  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.424029  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.462160  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.475782  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.490529  297008 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0907 00:10:55.494779  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.499653  297008 out.go:179]   - Using image docker.io/busybox:stable
	I0907 00:10:55.504406  297008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0907 00:10:55.504427  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0907 00:10:55.504491  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:10:55.519954  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.536180  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.557529  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.559767  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:10:55.587059  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	W0907 00:10:55.588400  297008 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0907 00:10:55.588433  297008 retry.go:31] will retry after 298.704076ms: ssh: handshake failed: EOF
	I0907 00:10:55.657536  297008 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:10:55.657562  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0907 00:10:55.741845  297008 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0907 00:10:55.741910  297008 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0907 00:10:55.750734  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0907 00:10:55.827041  297008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:10:55.841969  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:10:55.882125  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0907 00:10:55.882149  297008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0907 00:10:55.915941  297008 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0907 00:10:55.915965  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0907 00:10:55.945040  297008 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0907 00:10:55.945076  297008 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0907 00:10:55.953830  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0907 00:10:55.989218  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0907 00:10:55.991799  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0907 00:10:56.019542  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0907 00:10:56.028258  297008 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0907 00:10:56.028299  297008 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0907 00:10:56.046752  297008 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0907 00:10:56.046782  297008 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0907 00:10:56.057566  297008 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0907 00:10:56.057594  297008 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0907 00:10:56.065501  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0907 00:10:56.100178  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:10:56.104221  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0907 00:10:56.104257  297008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0907 00:10:56.107572  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:10:56.123637  297008 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0907 00:10:56.123667  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0907 00:10:56.137802  297008 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0907 00:10:56.137841  297008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0907 00:10:56.145304  297008 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:10:56.145331  297008 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0907 00:10:56.148488  297008 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0907 00:10:56.148518  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0907 00:10:56.290870  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0907 00:10:56.296148  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0907 00:10:56.296374  297008 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0907 00:10:56.296430  297008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0907 00:10:56.299599  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0907 00:10:56.306420  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0907 00:10:56.309341  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0907 00:10:56.309406  297008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0907 00:10:56.468616  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0907 00:10:56.468693  297008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0907 00:10:56.499829  297008 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0907 00:10:56.499912  297008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0907 00:10:56.669637  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0907 00:10:56.669712  297008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0907 00:10:56.689275  297008 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0907 00:10:56.689338  297008 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0907 00:10:56.805848  297008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0907 00:10:56.805921  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0907 00:10:56.891612  297008 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0907 00:10:56.891688  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0907 00:10:56.904657  297008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0907 00:10:56.904734  297008 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0907 00:10:56.973454  297008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0907 00:10:56.973526  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0907 00:10:56.981817  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0907 00:10:57.089394  297008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0907 00:10:57.089471  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0907 00:10:57.149241  297008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0907 00:10:57.149309  297008 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0907 00:10:57.172881  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0907 00:10:59.021507  297008 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.918407721s)
	I0907 00:10:59.021589  297008 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0907 00:10:59.502544  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.751769162s)
	I0907 00:10:59.502625  297008 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.675560613s)
	I0907 00:10:59.503366  297008 node_ready.go:35] waiting up to 6m0s for node "addons-055380" to be "Ready" ...
	I0907 00:10:59.642685  297008 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-055380" context rescaled to 1 replicas
	I0907 00:11:00.037211  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.195200254s)
	W0907 00:11:00.037307  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:00.037347  297008 retry.go:31] will retry after 250.272056ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:00.037407  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.083551944s)
	I0907 00:11:00.288500  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:01.271819  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.282565363s)
	I0907 00:11:01.272053  297008 addons.go:479] Verifying addon ingress=true in "addons-055380"
	I0907 00:11:01.271899  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.280075213s)
	I0907 00:11:01.271908  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.252343964s)
	I0907 00:11:01.271917  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.206387451s)
	I0907 00:11:01.271924  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.171726188s)
	I0907 00:11:01.271943  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.164338893s)
	I0907 00:11:01.271975  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.972310247s)
	I0907 00:11:01.272367  297008 addons.go:479] Verifying addon metrics-server=true in "addons-055380"
	I0907 00:11:01.271984  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.965500932s)
	I0907 00:11:01.271995  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.290112827s)
	W0907 00:11:01.272469  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0907 00:11:01.272487  297008 retry.go:31] will retry after 135.274521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0907 00:11:01.272007  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.975836291s)
	I0907 00:11:01.272031  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.981088123s)
	I0907 00:11:01.272708  297008 addons.go:479] Verifying addon registry=true in "addons-055380"
	I0907 00:11:01.275479  297008 out.go:179] * Verifying ingress addon...
	I0907 00:11:01.277407  297008 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-055380 service yakd-dashboard -n yakd-dashboard
	
	I0907 00:11:01.277445  297008 out.go:179] * Verifying registry addon...
	I0907 00:11:01.280196  297008 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0907 00:11:01.282141  297008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0907 00:11:01.335761  297008 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0907 00:11:01.335792  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:01.336061  297008 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0907 00:11:01.336078  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0907 00:11:01.363725  297008 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0907 00:11:01.408636  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0907 00:11:01.520514  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:01.792336  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:01.792594  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:01.944503  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.771516538s)
	I0907 00:11:01.944545  297008 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-055380"
	I0907 00:11:01.944741  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.656136841s)
	W0907 00:11:01.944779  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:01.944805  297008 retry.go:31] will retry after 286.937967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:01.947724  297008 out.go:179] * Verifying csi-hostpath-driver addon...
	I0907 00:11:01.951345  297008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0907 00:11:01.971740  297008 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0907 00:11:01.971767  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:02.232372  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:02.286666  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:02.286869  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:02.455410  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:02.784614  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:02.786614  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:02.955392  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:03.285540  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:03.286138  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:03.454896  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:03.783343  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:03.785011  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:03.954595  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:04.007622  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:04.286710  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:04.287040  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:04.334101  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.925414668s)
	I0907 00:11:04.334179  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.101706979s)
	W0907 00:11:04.334232  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:04.334268  297008 retry.go:31] will retry after 792.586926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:04.455530  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:04.785431  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:04.785479  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:04.955189  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:05.127084  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:05.285226  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:05.286726  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:05.454877  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:05.786293  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:05.786437  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0907 00:11:05.931339  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:05.931375  297008 retry.go:31] will retry after 816.03667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:05.955550  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:06.014343  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:06.284534  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:06.285543  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:06.455307  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:06.517103  297008 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0907 00:11:06.517185  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:11:06.540506  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
	I0907 00:11:06.662392  297008 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0907 00:11:06.680255  297008 addons.go:238] Setting addon gcp-auth=true in "addons-055380"
	I0907 00:11:06.680348  297008 host.go:66] Checking if "addons-055380" exists ...
	I0907 00:11:06.680803  297008 cli_runner.go:164] Run: docker container inspect addons-055380 --format={{.State.Status}}
	I0907 00:11:06.697896  297008 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0907 00:11:06.697950  297008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-055380
	I0907 00:11:06.715211  297008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/addons-055380/id_rsa Username:docker}
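At this point the harness has copied the fake GCP credential material onto the node (google_application_credentials.json and google_cloud_project under /var/lib/minikube) and is enabling the gcp-auth addon against it. A hedged way to confirm those files landed, reusing the profile and binary path from this run, would be:

	out/minikube-linux-arm64 -p addons-055380 ssh "sudo ls -l /var/lib/minikube/google_application_credentials.json /var/lib/minikube/google_cloud_project"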
	I0907 00:11:06.748106  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:06.786458  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:06.786859  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:06.955011  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:07.284527  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:07.286322  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:07.456231  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:07.564480  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:07.564513  297008 retry.go:31] will retry after 1.663260322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:07.568030  297008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0907 00:11:07.570849  297008 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0907 00:11:07.573637  297008 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0907 00:11:07.573666  297008 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0907 00:11:07.592960  297008 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0907 00:11:07.592982  297008 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0907 00:11:07.612616  297008 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0907 00:11:07.612640  297008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0907 00:11:07.639611  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0907 00:11:07.783681  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:07.785696  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:07.955049  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:08.020015  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:08.135419  297008 addons.go:479] Verifying addon gcp-auth=true in "addons-055380"
	I0907 00:11:08.138436  297008 out.go:179] * Verifying gcp-auth addon...
	I0907 00:11:08.141982  297008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0907 00:11:08.147301  297008 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0907 00:11:08.147372  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
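The gcp-auth verification loop that starts here polls a single label selector in the gcp-auth namespace. Outside the test harness, the same rollout could be followed with a plain kubectl watch; a sketch assuming only the selector and namespace shown in the log:

	kubectl --context addons-055380 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -w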
	I0907 00:11:08.283938  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:08.285800  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:08.455049  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:08.646118  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:08.784681  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:08.785742  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:08.955397  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:09.145426  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:09.228797  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:09.283793  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:09.285554  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:09.455333  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:09.645284  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:09.786311  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:09.787904  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:09.956051  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:10.031256  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:10.031338  297008 retry.go:31] will retry after 1.134162755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:10.145904  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:10.284135  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:10.286115  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:10.455000  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:10.506946  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:10.645457  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:10.784151  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:10.785569  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:10.954588  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:11.151166  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:11.166231  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:11.283921  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:11.286378  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:11.454392  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:11.645702  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:11.784262  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:11.797111  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:11.955900  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:11.973152  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:11.973188  297008 retry.go:31] will retry after 1.941257176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:12.145140  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:12.284349  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:12.285483  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:12.454215  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:12.507201  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:12.645886  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:12.783732  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:12.785672  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:12.954592  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:13.145193  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:13.283380  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:13.285536  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:13.455424  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:13.645769  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:13.783696  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:13.786084  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:13.915276  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:13.954328  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:14.144898  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:14.285033  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:14.285380  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:14.453982  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:14.645952  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0907 00:11:14.720791  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:14.720838  297008 retry.go:31] will retry after 4.997485637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:14.783921  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:14.786642  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:14.954918  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:15.010469  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:15.145407  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:15.284093  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:15.285539  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:15.454578  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:15.645374  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:15.783634  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:15.786038  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:15.954548  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:16.145333  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:16.284103  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:16.285578  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:16.454430  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:16.645668  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:16.785513  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:16.786301  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:16.954156  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:17.145729  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:17.284783  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:17.285689  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:17.454953  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:17.506656  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:17.645555  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:17.783503  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:17.785486  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:17.954104  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:18.145630  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:18.283615  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:18.285341  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:18.454257  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:18.645384  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:18.783748  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:18.786409  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:18.954740  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:19.145451  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:19.285508  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:19.285656  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:19.454480  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:19.645576  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:19.718729  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:19.790578  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:19.790951  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:19.955806  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:20.007134  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:20.145137  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:20.288175  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:20.289326  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:20.456457  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:20.527506  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:20.527539  297008 retry.go:31] will retry after 5.582180992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:20.645662  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:20.783490  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:20.785996  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:20.955084  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:21.150721  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:21.284030  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:21.285831  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:21.454670  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:21.645353  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:21.783141  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:21.785595  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:21.954670  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:22.007452  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:22.145269  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:22.283299  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:22.285251  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:22.455327  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:22.646765  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:22.783872  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:22.785667  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:22.954736  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:23.144780  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:23.283921  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:23.285657  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:23.454729  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:23.645474  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:23.783429  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:23.785571  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:23.954422  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:24.148906  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:24.284978  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:24.285503  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:24.454597  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:24.506805  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:24.646034  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:24.785437  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:24.785488  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:24.954524  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:25.145662  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:25.283568  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:25.285707  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:25.454520  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:25.644935  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:25.784927  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:25.785385  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:25.954596  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:26.110181  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:26.145381  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:26.284148  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:26.286580  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:26.454546  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:26.507539  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:26.645740  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:26.783723  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:26.787383  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0907 00:11:26.911743  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:26.911832  297008 retry.go:31] will retry after 13.043906909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:26.954668  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:27.145530  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:27.284006  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:27.285891  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:27.454838  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:27.645590  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:27.785695  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:27.786258  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:27.955041  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:28.145782  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:28.283841  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:28.285794  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:28.454698  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:28.645592  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:28.783540  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:28.785973  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:28.954842  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:29.006613  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:29.145428  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:29.283215  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:29.285495  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:29.454445  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:29.646516  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:29.784201  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:29.786777  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:29.955691  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:30.145930  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:30.284210  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:30.285703  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:30.454754  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:30.645190  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:30.784796  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:30.785489  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:30.954464  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:31.008745  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:31.146080  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:31.283082  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:31.285327  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:31.454689  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:31.645687  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:31.784399  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:31.786681  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:31.954619  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:32.145457  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:32.284743  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:32.285959  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:32.456877  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:32.645407  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:32.783617  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:32.786219  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:32.955263  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:33.010822  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:33.145884  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:33.285325  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:33.286442  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:33.455339  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:33.645933  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:33.784371  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:33.786059  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:33.955197  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:34.145514  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:34.284530  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:34.285674  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:34.454457  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:34.646143  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:34.784124  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:34.786449  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:34.954667  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:35.145375  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:35.283649  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:35.284852  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:35.455027  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:35.506848  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:35.644740  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:35.785666  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:35.785904  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:35.955155  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:36.145737  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:36.284075  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:36.286238  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:36.454183  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:36.644742  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:36.783864  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:36.785879  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:36.955157  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:37.145506  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:37.283709  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:37.285804  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:37.454994  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:37.644778  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:37.783782  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:37.785571  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:37.955257  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0907 00:11:38.007742  297008 node_ready.go:57] node "addons-055380" has "Ready":"False" status (will retry)
	I0907 00:11:38.145550  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:38.284206  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:38.285376  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:38.454127  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:38.644885  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:38.786029  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:38.786866  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:39.002918  297008 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0907 00:11:39.002957  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:39.028725  297008 node_ready.go:49] node "addons-055380" is "Ready"
	I0907 00:11:39.028773  297008 node_ready.go:38] duration metric: took 39.525346207s for node "addons-055380" to be "Ready" ...
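The condition the node_ready poller has been retrying on throughout this section is the node's Ready condition; a hedged manual equivalent of that check is:

	kubectl --context addons-055380 get node addons-055380 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

which prints True once the kubelet reports the node Ready, matching the transition logged here.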
	I0907 00:11:39.028804  297008 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:11:39.028898  297008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:11:39.052886  297008 api_server.go:72] duration metric: took 44.282261749s to wait for apiserver process to appear ...
	I0907 00:11:39.052934  297008 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:11:39.052954  297008 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0907 00:11:39.068547  297008 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0907 00:11:39.075527  297008 api_server.go:141] control plane version: v1.34.0
	I0907 00:11:39.075558  297008 api_server.go:131] duration metric: took 22.616659ms to wait for apiserver health ...
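The healthz probe recorded above can be reproduced against the same endpoint; a minimal sketch (the -k flag skips verification of the cluster's self-signed serving certificate, and the address is only reachable from the CI host on the Docker network):

	curl -k https://192.168.49.2:8443/healthz

A healthy apiserver answers 200 with the body "ok", as in the response logged above.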
	I0907 00:11:39.075575  297008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:11:39.093716  297008 system_pods.go:59] 19 kube-system pods found
	I0907 00:11:39.093755  297008 system_pods.go:61] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:11:39.093762  297008 system_pods.go:61] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending
	I0907 00:11:39.093769  297008 system_pods.go:61] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending
	I0907 00:11:39.093781  297008 system_pods.go:61] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending
	I0907 00:11:39.093802  297008 system_pods.go:61] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:39.093821  297008 system_pods.go:61] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:39.093826  297008 system_pods.go:61] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:39.093837  297008 system_pods.go:61] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:39.093843  297008 system_pods.go:61] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending
	I0907 00:11:39.093861  297008 system_pods.go:61] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:39.093873  297008 system_pods.go:61] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:39.093877  297008 system_pods.go:61] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending
	I0907 00:11:39.093890  297008 system_pods.go:61] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending
	I0907 00:11:39.093905  297008 system_pods.go:61] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:39.093910  297008 system_pods.go:61] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending
	I0907 00:11:39.093915  297008 system_pods.go:61] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending
	I0907 00:11:39.093937  297008 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending
	I0907 00:11:39.093952  297008 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending
	I0907 00:11:39.093963  297008 system_pods.go:61] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Pending
	I0907 00:11:39.093970  297008 system_pods.go:74] duration metric: took 18.388132ms to wait for pod list to return data ...
	I0907 00:11:39.093982  297008 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:11:39.097162  297008 default_sa.go:45] found service account: "default"
	I0907 00:11:39.097201  297008 default_sa.go:55] duration metric: took 3.21229ms for default service account to be created ...
	I0907 00:11:39.097212  297008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:11:39.107991  297008 system_pods.go:86] 19 kube-system pods found
	I0907 00:11:39.108027  297008 system_pods.go:89] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:11:39.108044  297008 system_pods.go:89] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending
	I0907 00:11:39.108071  297008 system_pods.go:89] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending
	I0907 00:11:39.108083  297008 system_pods.go:89] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending
	I0907 00:11:39.108088  297008 system_pods.go:89] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:39.108093  297008 system_pods.go:89] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:39.108103  297008 system_pods.go:89] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:39.108107  297008 system_pods.go:89] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:39.108129  297008 system_pods.go:89] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending
	I0907 00:11:39.108139  297008 system_pods.go:89] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:39.108144  297008 system_pods.go:89] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:39.108157  297008 system_pods.go:89] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending
	I0907 00:11:39.108170  297008 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending
	I0907 00:11:39.108177  297008 system_pods.go:89] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:39.108199  297008 system_pods.go:89] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending
	I0907 00:11:39.108210  297008 system_pods.go:89] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending
	I0907 00:11:39.108214  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending
	I0907 00:11:39.108217  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending
	I0907 00:11:39.108231  297008 system_pods.go:89] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Pending
	I0907 00:11:39.108252  297008 retry.go:31] will retry after 288.33374ms: missing components: kube-dns
	I0907 00:11:39.191530  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:39.379687  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:39.380178  297008 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0907 00:11:39.380192  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:39.449375  297008 system_pods.go:86] 19 kube-system pods found
	I0907 00:11:39.449410  297008 system_pods.go:89] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:11:39.449417  297008 system_pods.go:89] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending
	I0907 00:11:39.449423  297008 system_pods.go:89] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending
	I0907 00:11:39.449427  297008 system_pods.go:89] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending
	I0907 00:11:39.449431  297008 system_pods.go:89] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:39.449462  297008 system_pods.go:89] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:39.449468  297008 system_pods.go:89] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:39.449472  297008 system_pods.go:89] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:39.449479  297008 system_pods.go:89] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending
	I0907 00:11:39.449490  297008 system_pods.go:89] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:39.449494  297008 system_pods.go:89] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:39.449501  297008 system_pods.go:89] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:11:39.449506  297008 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending
	I0907 00:11:39.449519  297008 system_pods.go:89] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:39.449548  297008 system_pods.go:89] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0907 00:11:39.449553  297008 system_pods.go:89] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending
	I0907 00:11:39.449558  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending
	I0907 00:11:39.449562  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending
	I0907 00:11:39.449567  297008 system_pods.go:89] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:11:39.449592  297008 retry.go:31] will retry after 238.383054ms: missing components: kube-dns
	I0907 00:11:39.478280  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:39.663211  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:39.718497  297008 system_pods.go:86] 19 kube-system pods found
	I0907 00:11:39.718532  297008 system_pods.go:89] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:11:39.718541  297008 system_pods.go:89] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending
	I0907 00:11:39.718547  297008 system_pods.go:89] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending
	I0907 00:11:39.718551  297008 system_pods.go:89] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending
	I0907 00:11:39.718562  297008 system_pods.go:89] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:39.718567  297008 system_pods.go:89] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:39.718571  297008 system_pods.go:89] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:39.718575  297008 system_pods.go:89] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:39.718582  297008 system_pods.go:89] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0907 00:11:39.718586  297008 system_pods.go:89] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:39.718591  297008 system_pods.go:89] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:39.718597  297008 system_pods.go:89] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:11:39.718621  297008 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending
	I0907 00:11:39.718643  297008 system_pods.go:89] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:39.718651  297008 system_pods.go:89] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0907 00:11:39.718664  297008 system_pods.go:89] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending
	I0907 00:11:39.718669  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending
	I0907 00:11:39.718673  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending
	I0907 00:11:39.718679  297008 system_pods.go:89] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:11:39.718723  297008 retry.go:31] will retry after 467.169834ms: missing components: kube-dns
	I0907 00:11:39.863468  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:39.863968  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:39.956505  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:40.017780  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:40.158093  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:40.219093  297008 system_pods.go:86] 19 kube-system pods found
	I0907 00:11:40.219134  297008 system_pods.go:89] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:11:40.219144  297008 system_pods.go:89] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0907 00:11:40.219151  297008 system_pods.go:89] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0907 00:11:40.219157  297008 system_pods.go:89] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0907 00:11:40.219162  297008 system_pods.go:89] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:40.219172  297008 system_pods.go:89] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:40.219176  297008 system_pods.go:89] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:40.219188  297008 system_pods.go:89] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:40.219210  297008 system_pods.go:89] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0907 00:11:40.219214  297008 system_pods.go:89] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:40.219219  297008 system_pods.go:89] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:40.219225  297008 system_pods.go:89] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:11:40.219237  297008 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0907 00:11:40.219245  297008 system_pods.go:89] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:40.219251  297008 system_pods.go:89] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0907 00:11:40.219260  297008 system_pods.go:89] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0907 00:11:40.219274  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0907 00:11:40.219287  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0907 00:11:40.219294  297008 system_pods.go:89] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:11:40.219316  297008 retry.go:31] will retry after 469.30422ms: missing components: kube-dns
	I0907 00:11:40.307327  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:40.308004  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:40.455503  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:40.665907  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:40.692697  297008 system_pods.go:86] 19 kube-system pods found
	I0907 00:11:40.692741  297008 system_pods.go:89] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:11:40.692750  297008 system_pods.go:89] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0907 00:11:40.692758  297008 system_pods.go:89] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0907 00:11:40.692765  297008 system_pods.go:89] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0907 00:11:40.692770  297008 system_pods.go:89] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:40.692775  297008 system_pods.go:89] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:40.692779  297008 system_pods.go:89] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:40.692783  297008 system_pods.go:89] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:40.692802  297008 system_pods.go:89] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0907 00:11:40.692824  297008 system_pods.go:89] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:40.692830  297008 system_pods.go:89] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:40.692838  297008 system_pods.go:89] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:11:40.692848  297008 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0907 00:11:40.692856  297008 system_pods.go:89] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:40.692874  297008 system_pods.go:89] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0907 00:11:40.692884  297008 system_pods.go:89] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0907 00:11:40.692890  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0907 00:11:40.692896  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0907 00:11:40.692908  297008 system_pods.go:89] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:11:40.692925  297008 retry.go:31] will retry after 652.779889ms: missing components: kube-dns
	I0907 00:11:40.807140  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:40.829869  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:41.022541  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:41.150853  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:41.284249  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:41.286607  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:41.352370  297008 system_pods.go:86] 19 kube-system pods found
	I0907 00:11:41.352401  297008 system_pods.go:89] "coredns-66bc5c9577-d5ljr" [336b52e3-bd05-4c67-9d69-9df92167ee62] Running
	I0907 00:11:41.352413  297008 system_pods.go:89] "csi-hostpath-attacher-0" [e4151ad5-5884-4a8d-a4c1-3e3d4bd26b07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0907 00:11:41.352420  297008 system_pods.go:89] "csi-hostpath-resizer-0" [3ac6a254-bbf2-4363-940f-ef90c3de695e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0907 00:11:41.352430  297008 system_pods.go:89] "csi-hostpathplugin-4zcxn" [7abfb475-4277-4c28-b5d8-910459c17593] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0907 00:11:41.352443  297008 system_pods.go:89] "etcd-addons-055380" [1c4e6b82-1674-4067-b0b4-e585e79468b8] Running
	I0907 00:11:41.352450  297008 system_pods.go:89] "kindnet-l24xr" [8c0f41ee-e8e9-4845-9263-3639ae33e393] Running
	I0907 00:11:41.352455  297008 system_pods.go:89] "kube-apiserver-addons-055380" [3db1dace-147f-466b-8c5b-eb3424792882] Running
	I0907 00:11:41.352459  297008 system_pods.go:89] "kube-controller-manager-addons-055380" [6714bf05-934f-4fe7-b993-88c21e400dbc] Running
	I0907 00:11:41.352472  297008 system_pods.go:89] "kube-ingress-dns-minikube" [05aaffeb-ab50-464f-912b-21954b9dc508] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0907 00:11:41.352477  297008 system_pods.go:89] "kube-proxy-g28wn" [aeb98127-87f1-4b9b-8bf0-ce9600f8e439] Running
	I0907 00:11:41.352482  297008 system_pods.go:89] "kube-scheduler-addons-055380" [ab5a0e94-3228-4814-8331-74a271469221] Running
	I0907 00:11:41.352493  297008 system_pods.go:89] "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0907 00:11:41.352499  297008 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0907 00:11:41.352518  297008 system_pods.go:89] "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0907 00:11:41.352527  297008 system_pods.go:89] "registry-creds-764b6fb674-5qt6r" [1a83235d-09c3-465f-8be2-a73258b361aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0907 00:11:41.352535  297008 system_pods.go:89] "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0907 00:11:41.352541  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8b2x2" [0ba6214e-a17c-4848-a483-892214b9239f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0907 00:11:41.352553  297008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gntk2" [3b11de96-7360-493a-890e-d370208a8cc8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0907 00:11:41.352558  297008 system_pods.go:89] "storage-provisioner" [395e2fd6-2a90-4463-9d20-803306697487] Running
	I0907 00:11:41.352566  297008 system_pods.go:126] duration metric: took 2.255348352s to wait for k8s-apps to be running ...
	I0907 00:11:41.352578  297008 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:11:41.352643  297008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:11:41.404841  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.448254864s)
	W0907 00:11:41.404878  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:41.404896  297008 retry.go:31] will retry after 16.876212735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
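
The apply failure above is a client-side validation error: kubectl is reporting that a document in ig-crd.yaml reached it without the top-level apiVersion and kind fields that every Kubernetes manifest must declare. The file's actual contents are not shown in this log, so the lines below are only a hypothetical way to inspect it from the node, together with the header shape a valid CRD manifest would have:

    # hypothetical check; the file path is taken from the error message above:
    minikube -p addons-055380 ssh "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"
    # a CRD manifest that passes validation starts with both top-level fields, e.g.:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
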
	I0907 00:11:41.405004  297008 system_svc.go:56] duration metric: took 52.422144ms WaitForService to wait for kubelet
	I0907 00:11:41.405022  297008 kubeadm.go:578] duration metric: took 46.634403499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:11:41.405042  297008 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:11:41.408890  297008 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0907 00:11:41.408930  297008 node_conditions.go:123] node cpu capacity is 2
	I0907 00:11:41.408943  297008 node_conditions.go:105] duration metric: took 3.895551ms to run NodePressure ...
	I0907 00:11:41.408956  297008 start.go:241] waiting for startup goroutines ...
	I0907 00:11:41.455286  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:41.645405  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:41.787243  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:41.789647  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:41.955274  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:42.145570  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:42.284092  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:42.286296  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:42.455144  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:42.645679  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:42.789008  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:42.789346  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:42.989480  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:43.145293  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:43.284198  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:43.286639  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:43.456083  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:43.646073  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:43.784007  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:43.787364  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:43.955117  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:44.145160  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:44.286308  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:44.287012  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:44.458588  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:44.645692  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:44.783713  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:44.786029  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:44.955274  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:45.147373  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:45.289585  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:45.290814  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:45.455584  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:45.645335  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:45.783520  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:45.786638  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:45.955049  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:46.145076  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:46.284367  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:46.285275  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:46.454879  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:46.644779  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:46.788614  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:46.788932  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:46.957751  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:47.145811  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:47.285603  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:47.289317  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:47.455292  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:47.645939  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:47.789089  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:47.789815  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:47.956093  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:48.145285  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:48.283704  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:48.286035  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:48.455864  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:48.647066  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:48.794537  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:48.794965  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:48.956281  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:49.145865  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:49.285929  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:49.287817  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:49.456125  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:49.644961  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:49.784221  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:49.786922  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:49.955350  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:50.147779  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:50.283678  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:50.285885  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:50.456506  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:50.646148  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:50.783071  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:50.785637  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:50.965618  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:51.152270  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:51.286121  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:51.286728  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:51.456312  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:51.646651  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:51.788035  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:51.789277  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:51.966678  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:52.147600  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:52.289580  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:52.290015  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:52.458080  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:52.645839  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:52.787844  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:52.788207  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:52.965826  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:53.153060  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:53.291991  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:53.292426  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:53.459559  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:53.645660  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:53.789749  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:53.792067  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:53.957000  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:54.158483  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:54.284928  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:54.286133  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:54.455809  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:54.645033  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:54.783283  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:54.785205  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:54.957945  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:55.148656  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:55.286377  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:55.287789  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:55.455743  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:55.648235  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:55.786608  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:55.788911  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:55.955599  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:56.145994  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:56.285174  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:56.285287  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:56.455137  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:56.645233  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:56.783924  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:56.788469  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:56.957055  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:57.147058  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:57.288757  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:57.290334  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:57.455526  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:57.649455  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:57.786112  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:57.786546  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:57.955059  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:58.145112  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:58.281336  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:11:58.286903  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:58.287121  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:58.455223  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:58.645086  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:58.798295  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:58.799087  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:58.955695  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:59.147204  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:59.291274  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:59.292702  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:59.451525  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.170149812s)
	W0907 00:11:59.451568  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:59.451588  297008 retry.go:31] will retry after 12.364300442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:11:59.454935  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:11:59.645300  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:11:59.783816  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:11:59.786237  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:11:59.954357  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:00.154389  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:00.326528  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:00.326684  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:00.460035  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:00.645468  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:00.784325  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:00.786273  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:00.955074  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:01.145447  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:01.283987  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:01.286527  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:01.454675  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:01.646037  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:01.784261  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:01.785411  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:01.955386  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:02.145341  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:02.283496  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:02.285965  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:02.458142  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:02.646800  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:02.785610  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:02.786274  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:02.955287  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:03.145231  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:03.286660  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:03.296247  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:03.455476  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:03.644938  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:03.784796  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0907 00:12:03.785881  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:03.957853  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:04.145173  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:04.286682  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:04.286705  297008 kapi.go:107] duration metric: took 1m3.004565705s to wait for kubernetes.io/minikube-addons=registry ...
	I0907 00:12:04.458995  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:04.648564  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:04.784430  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:04.956067  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:05.145689  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:05.285022  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:05.456018  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:05.648614  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:05.784133  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:05.955901  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:06.146551  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:06.284806  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:06.457256  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:06.646524  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:06.783906  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:06.955308  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:07.146460  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:07.284231  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:07.454853  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:07.645739  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:07.783979  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:07.955909  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:08.144940  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:08.284096  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:08.456430  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:08.648143  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:08.785986  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:08.957195  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:09.145415  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:09.284443  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:09.462712  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:09.649901  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:09.784590  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:09.956126  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:10.146034  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:10.283878  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:10.456998  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:10.647279  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:10.784345  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:10.962745  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:11.147026  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:11.283216  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:11.454335  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:11.647314  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:11.784005  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:11.816342  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:12:11.965431  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:12.149313  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:12.289542  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:12.455429  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:12.666992  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:12.786180  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:12.966780  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:13.145550  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:13.169951  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.353562738s)
	W0907 00:12:13.170250  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:12:13.170288  297008 retry.go:31] will retry after 40.346850714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0907 00:12:13.287975  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:13.457977  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:13.657163  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:13.797261  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:13.966295  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:14.155950  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:14.286993  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:14.455819  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:14.654174  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:14.784347  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:14.955186  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:15.147277  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:15.283713  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:15.455110  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:15.645312  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:15.783809  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:15.955654  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:16.146077  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:16.283375  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:16.455025  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:16.645207  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:16.784471  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:16.954843  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:17.144923  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:17.284263  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:17.455812  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:17.644950  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:17.784219  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:17.955352  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:18.146166  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:18.283253  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:18.454932  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:18.645598  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:18.783878  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:18.955606  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:19.146081  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:19.284157  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:19.458448  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:19.646045  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:19.784224  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:19.956836  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:20.145036  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:20.283021  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:20.455157  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:20.645855  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:20.784332  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:20.954390  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:21.150301  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:21.339271  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:21.485224  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:21.652482  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:21.783398  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:21.954876  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:22.144881  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:22.283906  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:22.456014  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:22.645591  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:22.789877  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:22.955957  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:23.145667  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:23.285789  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:23.455278  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:23.645035  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:23.783206  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:23.955782  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:24.145277  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:24.283425  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:24.455934  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:24.645263  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:24.784024  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:24.955232  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:25.145477  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:25.286029  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:25.455189  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:25.655778  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:25.785986  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:25.957145  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:26.144786  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:26.284049  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:26.455832  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:26.649031  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:26.786550  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:26.954800  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:27.148943  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:27.285886  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:27.454808  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:27.648240  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:27.790651  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:27.957164  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:28.146220  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:28.283725  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:28.467771  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:28.652882  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:28.791873  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:28.957833  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:29.145294  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:29.283370  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:29.462853  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:29.651932  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:29.784739  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:29.955180  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:30.144948  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:30.283937  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:30.454955  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:30.645658  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:30.783690  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:30.955761  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:31.146781  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:31.284084  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:31.455156  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:31.651840  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:31.784136  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:31.955263  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:32.145235  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:32.283167  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:32.454403  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:32.645661  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:32.790010  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:32.962015  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:33.145477  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:33.286934  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:33.455295  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:33.645306  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:33.783064  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:33.955071  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:34.145560  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:34.284666  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:34.456309  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:34.645540  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:34.784119  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:34.955210  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:35.145852  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:35.283850  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:35.456306  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:35.645812  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:35.784114  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:35.959590  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:36.145854  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:36.284449  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:36.455234  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:36.645026  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:36.783382  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:36.957224  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:37.145959  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:37.284891  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:37.456054  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:37.646431  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:37.788433  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:37.954912  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:38.147221  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:38.301441  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:38.455336  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:38.651724  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:38.784673  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:38.961845  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:39.146159  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:39.283945  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:39.461966  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:39.648167  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:39.784116  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:39.956437  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:40.145795  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:40.284003  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:40.455588  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:40.647581  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:40.786302  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:40.956493  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:41.150195  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:41.283216  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:41.455675  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:41.645890  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:41.783966  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:41.957436  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:42.147474  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:42.284577  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:42.454936  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:42.646096  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:42.783438  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:42.956694  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:43.145829  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:43.289512  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:43.454987  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:43.645225  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:43.783644  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:43.959497  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:44.145498  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:44.283730  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:44.461072  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:44.650877  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:44.784640  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:44.956226  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:45.145974  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:45.291155  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:45.458466  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0907 00:12:45.645582  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:45.784619  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:45.955010  297008 kapi.go:107] duration metric: took 1m44.003663245s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0907 00:12:46.145297  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:46.285051  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:46.646015  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:46.784956  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:47.146835  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:47.284763  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:47.645565  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:47.784480  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:48.145966  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:48.284486  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:48.645652  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:48.784078  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:49.148301  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:49.283465  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:49.653701  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:49.784006  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:50.147984  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:50.284732  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:50.646596  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:50.784737  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:51.158531  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:51.283890  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:51.645667  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:51.784451  297008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0907 00:12:52.146263  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:52.283485  297008 kapi.go:107] duration metric: took 1m51.003289317s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0907 00:12:52.645922  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:53.145667  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:53.518287  297008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0907 00:12:53.645712  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:54.146244  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:54.632326  297008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.113997655s)
	W0907 00:12:54.632413  297008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0907 00:12:54.632528  297008 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
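	Note: every retry above fails for the same reason: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the file does not declare apiVersion and kind at the top level. The actual contents of ig-crd.yaml are not captured in this log, so as a hedged illustration only, the snippet below is a minimal, unrelated manifest (a ConfigMap with a placeholder name) showing the two fields the validator reports as missing; the real fix would be to restore those fields in the addon's CRD manifest, while the error text itself names the workaround of re-running the apply with --validate=false to skip schema validation.
	
	# hypothetical minimal manifest, not the real ig-crd.yaml:
	# apiVersion and kind are the fields kubectl validation requires
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example        # placeholder name
	data:
	  key: value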
	I0907 00:12:54.661440  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:55.145165  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:55.645617  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:56.147926  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:56.645812  297008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0907 00:12:57.145272  297008 kapi.go:107] duration metric: took 1m49.003289549s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0907 00:12:57.148296  297008 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-055380 cluster.
	I0907 00:12:57.151279  297008 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0907 00:12:57.154206  297008 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0907 00:12:57.157215  297008 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0907 00:12:57.160092  297008 addons.go:514] duration metric: took 2m2.389090108s for enable addons: enabled=[cloud-spanner nvidia-device-plugin amd-gpu-device-plugin registry-creds ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0907 00:12:57.160154  297008 start.go:246] waiting for cluster config update ...
	I0907 00:12:57.160176  297008 start.go:255] writing updated cluster config ...
	I0907 00:12:57.160496  297008 ssh_runner.go:195] Run: rm -f paused
	I0907 00:12:57.164133  297008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0907 00:12:57.168344  297008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d5ljr" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.174991  297008 pod_ready.go:94] pod "coredns-66bc5c9577-d5ljr" is "Ready"
	I0907 00:12:57.175020  297008 pod_ready.go:86] duration metric: took 6.636034ms for pod "coredns-66bc5c9577-d5ljr" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.177342  297008 pod_ready.go:83] waiting for pod "etcd-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.182105  297008 pod_ready.go:94] pod "etcd-addons-055380" is "Ready"
	I0907 00:12:57.182132  297008 pod_ready.go:86] duration metric: took 4.764724ms for pod "etcd-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.184741  297008 pod_ready.go:83] waiting for pod "kube-apiserver-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.190466  297008 pod_ready.go:94] pod "kube-apiserver-addons-055380" is "Ready"
	I0907 00:12:57.190497  297008 pod_ready.go:86] duration metric: took 5.725511ms for pod "kube-apiserver-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.192899  297008 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.567901  297008 pod_ready.go:94] pod "kube-controller-manager-addons-055380" is "Ready"
	I0907 00:12:57.567929  297008 pod_ready.go:86] duration metric: took 375.003505ms for pod "kube-controller-manager-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:57.768183  297008 pod_ready.go:83] waiting for pod "kube-proxy-g28wn" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:58.168510  297008 pod_ready.go:94] pod "kube-proxy-g28wn" is "Ready"
	I0907 00:12:58.168538  297008 pod_ready.go:86] duration metric: took 400.330097ms for pod "kube-proxy-g28wn" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:58.368618  297008 pod_ready.go:83] waiting for pod "kube-scheduler-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:58.768102  297008 pod_ready.go:94] pod "kube-scheduler-addons-055380" is "Ready"
	I0907 00:12:58.768133  297008 pod_ready.go:86] duration metric: took 399.486786ms for pod "kube-scheduler-addons-055380" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:12:58.768147  297008 pod_ready.go:40] duration metric: took 1.603973572s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0907 00:12:58.828727  297008 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0907 00:12:58.832247  297008 out.go:179] * Done! kubectl is now configured to use "addons-055380" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 07 00:14:50 addons-055380 crio[986]: time="2025-09-07 00:14:50.061860163Z" level=info msg="Removed pod sandbox: 3ae17caa17c4b28b0cf192cd8ffed14d3b4ab2ee29e0eb57c632545e822d1726" id=a6376f09-9568-4767-8e48-9e56b2c30d06 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.855859809Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-2vhn5/POD" id=face2bc9-1d20-4272-a208-6e3e72d7d32e name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.855923769Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.916791825Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2vhn5 Namespace:default ID:ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b UID:7112b204-0d28-46e4-82a1-9e829c367655 NetNS:/var/run/netns/22fa20c4-2cf0-4a9b-8419-d5c802980883 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.919718109Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-2vhn5 to CNI network \"kindnet\" (type=ptp)"
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.958462979Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2vhn5 Namespace:default ID:ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b UID:7112b204-0d28-46e4-82a1-9e829c367655 NetNS:/var/run/netns/22fa20c4-2cf0-4a9b-8419-d5c802980883 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.958627576Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-2vhn5 for CNI network kindnet (type=ptp)"
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.964219218Z" level=info msg="Ran pod sandbox ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b with infra container: default/hello-world-app-5d498dc89-2vhn5/POD" id=face2bc9-1d20-4272-a208-6e3e72d7d32e name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.967265536Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=94cbeb66-36a5-4c4d-a17a-13868a342926 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.967503848Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=94cbeb66-36a5-4c4d-a17a-13868a342926 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.969210246Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=52ab8583-41b8-45a7-b9e6-0f4e8019a780 name=/runtime.v1.ImageService/PullImage
	Sep 07 00:17:00 addons-055380 crio[986]: time="2025-09-07 00:17:00.971807116Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.222788337Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.982792322Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=52ab8583-41b8-45a7-b9e6-0f4e8019a780 name=/runtime.v1.ImageService/PullImage
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.983468801Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=99b36e1e-1132-4cf1-ad61-5bee70ca1dcd name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.984320987Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=99b36e1e-1132-4cf1-ad61-5bee70ca1dcd name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.985576204Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3f87e40e-52c1-434c-abfd-d8d37237ba93 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.986530069Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3f87e40e-52c1-434c-abfd-d8d37237ba93 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.993123806Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-2vhn5/hello-world-app" id=8f69b0dd-5558-498f-a035-39d09e639581 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 07 00:17:01 addons-055380 crio[986]: time="2025-09-07 00:17:01.993416823Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 07 00:17:02 addons-055380 crio[986]: time="2025-09-07 00:17:02.023185671Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0ba3cee9eb673925d16a8a3c403f757ec2d07b446d606e06a60658983c52f65d/merged/etc/passwd: no such file or directory"
	Sep 07 00:17:02 addons-055380 crio[986]: time="2025-09-07 00:17:02.023443995Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0ba3cee9eb673925d16a8a3c403f757ec2d07b446d606e06a60658983c52f65d/merged/etc/group: no such file or directory"
	Sep 07 00:17:02 addons-055380 crio[986]: time="2025-09-07 00:17:02.117055874Z" level=info msg="Created container 71830dfe382b61c9d9f8cdc9679b0f8c1ff213e939f097433f75ccd9a58fe99a: default/hello-world-app-5d498dc89-2vhn5/hello-world-app" id=8f69b0dd-5558-498f-a035-39d09e639581 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 07 00:17:02 addons-055380 crio[986]: time="2025-09-07 00:17:02.117981324Z" level=info msg="Starting container: 71830dfe382b61c9d9f8cdc9679b0f8c1ff213e939f097433f75ccd9a58fe99a" id=cad73104-d47d-49ed-bfe4-eb8c14d1a407 name=/runtime.v1.RuntimeService/StartContainer
	Sep 07 00:17:02 addons-055380 crio[986]: time="2025-09-07 00:17:02.128609452Z" level=info msg="Started container" PID=10221 containerID=71830dfe382b61c9d9f8cdc9679b0f8c1ff213e939f097433f75ccd9a58fe99a description=default/hello-world-app-5d498dc89-2vhn5/hello-world-app id=cad73104-d47d-49ed-bfe4-eb8c14d1a407 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	71830dfe382b6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   ecf66dc08df97       hello-world-app-5d498dc89-2vhn5
	75965b717e665       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   f00b5c27d08ee       nginx
	875852497beea       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   13bae1e4a61df       busybox
	af3c72ba951c8       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago            Running             controller                0                   5b8a8c4ab91c6       ingress-nginx-controller-9cc49f96f-xlt69
	f0dd1414d45ee       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b3f8a40cecf84afd8a5299442eab04c52f913ef6194e01dc4fbeb783f9d42c58            4 minutes ago            Running             gadget                    0                   9096c2653f0dd       gadget-lks2d
	853dd919f68a1       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             4 minutes ago            Exited              patch                     2                   11585a0664491       ingress-nginx-admission-patch-t2gk8
	9d18305614a12       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   9420bfb43e354       ingress-nginx-admission-create-7zjzb
	8e19066650cba       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               4 minutes ago            Running             minikube-ingress-dns      0                   e6f6789352207       kube-ingress-dns-minikube
	be7d4447d0187       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   cb5cc1bab0ac2       storage-provisioner
	246978ad7c36b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   e831dc597e8e8       coredns-66bc5c9577-d5ljr
	fd707258a546a       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             6 minutes ago            Running             kube-proxy                0                   453c5114dc10c       kube-proxy-g28wn
	fa6cf378ac1f1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             6 minutes ago            Running             kindnet-cni               0                   2c4027b535927       kindnet-l24xr
	51806db75734e       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             6 minutes ago            Running             kube-apiserver            0                   7d6ffb9f9fa25       kube-apiserver-addons-055380
	a982de7eea774       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             6 minutes ago            Running             etcd                      0                   6386c171083b8       etcd-addons-055380
	a7a41f681e19a       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             6 minutes ago            Running             kube-scheduler            0                   80d4f005b966b       kube-scheduler-addons-055380
	abed862c8045e       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             6 minutes ago            Running             kube-controller-manager   0                   ec4cf0ef83b84       kube-controller-manager-addons-055380
	
	
	==> coredns [246978ad7c36b8f4c49e2097b62df1c342652898c3462c0c00488e00faba9179] <==
	[INFO] 10.244.0.6:41290 - 18745 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002160122s
	[INFO] 10.244.0.6:41290 - 36099 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121584s
	[INFO] 10.244.0.6:41290 - 6035 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000095311s
	[INFO] 10.244.0.6:49874 - 45170 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00015822s
	[INFO] 10.244.0.6:49874 - 44916 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000178217s
	[INFO] 10.244.0.6:40200 - 30393 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119205s
	[INFO] 10.244.0.6:40200 - 29957 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140711s
	[INFO] 10.244.0.6:35436 - 5503 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133162s
	[INFO] 10.244.0.6:35436 - 5306 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000213187s
	[INFO] 10.244.0.6:59165 - 38685 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001456298s
	[INFO] 10.244.0.6:59165 - 38497 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001523539s
	[INFO] 10.244.0.6:48581 - 24339 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000137117s
	[INFO] 10.244.0.6:48581 - 24158 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166853s
	[INFO] 10.244.0.21:56368 - 60199 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000231444s
	[INFO] 10.244.0.21:38122 - 61340 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000328331s
	[INFO] 10.244.0.21:57129 - 30890 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011306s
	[INFO] 10.244.0.21:45769 - 24384 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185618s
	[INFO] 10.244.0.21:40012 - 28182 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000255042s
	[INFO] 10.244.0.21:53716 - 8331 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00026136s
	[INFO] 10.244.0.21:43905 - 61475 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002212202s
	[INFO] 10.244.0.21:42069 - 55541 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002097256s
	[INFO] 10.244.0.21:38276 - 54757 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001537358s
	[INFO] 10.244.0.21:60085 - 52136 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003404105s
	[INFO] 10.244.0.24:52851 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000309213s
	[INFO] 10.244.0.24:57982 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137823s
	
	
	==> describe nodes <==
	Name:               addons-055380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-055380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d
	                    minikube.k8s.io/name=addons-055380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_07T00_10_50_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-055380
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Sep 2025 00:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-055380
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Sep 2025 00:16:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Sep 2025 00:14:55 +0000   Sun, 07 Sep 2025 00:10:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Sep 2025 00:14:55 +0000   Sun, 07 Sep 2025 00:10:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Sep 2025 00:14:55 +0000   Sun, 07 Sep 2025 00:10:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Sep 2025 00:14:55 +0000   Sun, 07 Sep 2025 00:11:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-055380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd49d982ed8d4c25bfaa49767fd16c10
	  System UUID:                77ed9b75-72d1-4464-bc86-567088105075
	  Boot ID:                    beae285a-afb1-41fb-a1c4-2915721f6659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  default                     hello-world-app-5d498dc89-2vhn5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-lks2d                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-xlt69    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-d5ljr                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m5s
	  kube-system                 etcd-addons-055380                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m13s
	  kube-system                 kindnet-l24xr                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m8s
	  kube-system                 kube-apiserver-addons-055380                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-addons-055380       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-g28wn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-addons-055380                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m1s   kube-proxy       
	  Normal   Starting                 6m13s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m13s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m13s  kubelet          Node addons-055380 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m13s  kubelet          Node addons-055380 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m13s  kubelet          Node addons-055380 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m8s   node-controller  Node addons-055380 event: Registered Node addons-055380 in Controller
	  Normal   NodeReady                5m24s  kubelet          Node addons-055380 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 6 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013704] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510843] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033312] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.768135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.749154] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 6 23:22] hrtimer: interrupt took 27192686 ns
	[Sep 7 00:09] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [a982de7eea77472b7f89ec39aab441479534f5f25b1ef60ff41d1d294dee3f8d] <==
	{"level":"info","ts":"2025-09-07T00:10:55.793896Z","caller":"traceutil/trace.go:172","msg":"trace[504435582] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:335; }","duration":"168.641502ms","start":"2025-09-07T00:10:55.625241Z","end":"2025-09-07T00:10:55.793883Z","steps":["trace[504435582] 'agreement among raft nodes before linearized reading'  (duration: 138.806584ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-07T00:10:56.477285Z","caller":"traceutil/trace.go:172","msg":"trace[524787833] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"162.895172ms","start":"2025-09-07T00:10:56.314363Z","end":"2025-09-07T00:10:56.477258Z","steps":["trace[524787833] 'process raft request'  (duration: 162.798539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-07T00:10:56.698675Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.884498ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-07T00:10:56.709418Z","caller":"traceutil/trace.go:172","msg":"trace[1223814704] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:337; }","duration":"231.629066ms","start":"2025-09-07T00:10:56.477762Z","end":"2025-09-07T00:10:56.709391Z","steps":["trace[1223814704] 'range keys from in-memory index tree'  (duration: 220.861877ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-07T00:10:56.715755Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.166925ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128039796704040016 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-m56g5\" mod_revision:26 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-m56g5\" value_size:1331 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-m56g5\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-07T00:10:56.730975Z","caller":"traceutil/trace.go:172","msg":"trace[1406610155] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"403.553956ms","start":"2025-09-07T00:10:56.327403Z","end":"2025-09-07T00:10:56.730957Z","steps":["trace[1406610155] 'process raft request'  (duration: 285.90961ms)","trace[1406610155] 'compare'  (duration: 95.953781ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-07T00:10:56.763823Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-07T00:10:56.327378Z","time spent":"436.371744ms","remote":"127.0.0.1:56834","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1385,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-m56g5\" mod_revision:26 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-m56g5\" value_size:1331 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-m56g5\" > >"}
	{"level":"info","ts":"2025-09-07T00:10:56.773403Z","caller":"traceutil/trace.go:172","msg":"trace[928742310] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"445.833424ms","start":"2025-09-07T00:10:56.327550Z","end":"2025-09-07T00:10:56.773384Z","steps":["trace[928742310] 'process raft request'  (duration: 403.357762ms)","trace[928742310] 'compare'  (duration: 34.084053ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-07T00:10:56.776290Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-07T00:10:56.327541Z","time spent":"448.519734ms","remote":"127.0.0.1:57038","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3742,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:327 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:3706 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >"}
	{"level":"info","ts":"2025-09-07T00:10:58.342858Z","caller":"traceutil/trace.go:172","msg":"trace[1094752736] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"220.745953ms","start":"2025-09-07T00:10:58.122090Z","end":"2025-09-07T00:10:58.342836Z","steps":["trace[1094752736] 'process raft request'  (duration: 213.726324ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-07T00:10:58.403230Z","caller":"traceutil/trace.go:172","msg":"trace[816763064] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"232.889168ms","start":"2025-09-07T00:10:58.170325Z","end":"2025-09-07T00:10:58.403214Z","steps":["trace[816763064] 'process raft request'  (duration: 232.806476ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-07T00:10:58.403563Z","caller":"traceutil/trace.go:172","msg":"trace[1628237039] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"233.572364ms","start":"2025-09-07T00:10:58.169983Z","end":"2025-09-07T00:10:58.403556Z","steps":["trace[1628237039] 'process raft request'  (duration: 228.586585ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-07T00:10:58.517466Z","caller":"traceutil/trace.go:172","msg":"trace[1140396908] linearizableReadLoop","detail":"{readStateIndex:360; appliedIndex:360; }","duration":"113.973491ms","start":"2025-09-07T00:10:58.403473Z","end":"2025-09-07T00:10:58.517446Z","steps":["trace[1140396908] 'read index received'  (duration: 113.965606ms)","trace[1140396908] 'applied index is now lower than readState.Index'  (duration: 6.851µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-07T00:10:58.558618Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"222.619204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-07T00:10:58.559992Z","caller":"traceutil/trace.go:172","msg":"trace[2044365451] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:349; }","duration":"224.000349ms","start":"2025-09-07T00:10:58.335974Z","end":"2025-09-07T00:10:58.559974Z","steps":["trace[2044365451] 'agreement among raft nodes before linearized reading'  (duration: 222.563852ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-07T00:10:58.517786Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.748358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-07T00:10:58.572891Z","caller":"traceutil/trace.go:172","msg":"trace[669594334] range","detail":"{range_begin:/registry/deployments/kube-system/registry-creds; range_end:; response_count:0; response_revision:348; }","duration":"236.860639ms","start":"2025-09-07T00:10:58.336015Z","end":"2025-09-07T00:10:58.572875Z","steps":["trace[669594334] 'agreement among raft nodes before linearized reading'  (duration: 181.722848ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-07T00:10:58.528992Z","caller":"traceutil/trace.go:172","msg":"trace[1392502851] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"192.857999ms","start":"2025-09-07T00:10:58.336105Z","end":"2025-09-07T00:10:58.528963Z","steps":["trace[1392502851] 'process raft request'  (duration: 181.729207ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-07T00:10:58.654930Z","caller":"traceutil/trace.go:172","msg":"trace[1723046318] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"125.11501ms","start":"2025-09-07T00:10:58.529794Z","end":"2025-09-07T00:10:58.654909Z","steps":["trace[1723046318] 'process raft request'  (duration: 79.882233ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-07T00:11:01.929681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:11:01.958299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:11:24.052195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:11:24.097727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:11:24.140102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:11:24.150542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55142","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:17:03 up  1:59,  0 users,  load average: 0.56, 1.76, 2.64
	Linux addons-055380 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [fa6cf378ac1f1ba91290a54b1b76dfe0809b65272f6478ca32375de8271ea381] <==
	I0907 00:14:58.408351       1 main.go:301] handling current node
	I0907 00:15:08.408370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:15:08.408401       1 main.go:301] handling current node
	I0907 00:15:18.409058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:15:18.409093       1 main.go:301] handling current node
	I0907 00:15:28.408547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:15:28.408580       1 main.go:301] handling current node
	I0907 00:15:38.408899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:15:38.408943       1 main.go:301] handling current node
	I0907 00:15:48.408733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:15:48.408763       1 main.go:301] handling current node
	I0907 00:15:58.408360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:15:58.408478       1 main.go:301] handling current node
	I0907 00:16:08.408803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:16:08.408860       1 main.go:301] handling current node
	I0907 00:16:18.408834       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:16:18.408867       1 main.go:301] handling current node
	I0907 00:16:28.408624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:16:28.408663       1 main.go:301] handling current node
	I0907 00:16:38.408714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:16:38.408749       1 main.go:301] handling current node
	I0907 00:16:48.408435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:16:48.408472       1 main.go:301] handling current node
	I0907 00:16:58.408206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:16:58.408312       1 main.go:301] handling current node
	
	
	==> kube-apiserver [51806db75734ebd035a4d69e74eb5b0269388cf1197370ef0d5a1c8f239c50f0] <==
	I0907 00:13:22.823623       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.51.161"}
	E0907 00:14:11.119951       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0907 00:14:24.032909       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:14:25.737026       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0907 00:14:30.218104       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:14:38.455503       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0907 00:14:38.753795       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.213.117"}
	I0907 00:14:42.693187       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0907 00:14:42.693308       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0907 00:14:42.723049       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0907 00:14:42.723201       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0907 00:14:42.730361       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0907 00:14:42.730500       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0907 00:14:42.748698       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0907 00:14:42.748755       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0907 00:14:42.777457       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0907 00:14:42.777496       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0907 00:14:43.730755       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0907 00:14:43.778217       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0907 00:14:43.893285       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0907 00:14:55.045995       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0907 00:15:26.154026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:15:51.823686       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:16:42.226000       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:17:00.772801       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.30.28"}
	
	
	==> kube-controller-manager [abed862c8045ed3f8aff8c234eeeb305431754ff1f67817db4f60a023d48d085] <==
	I0907 00:14:54.245580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0907 00:14:59.564350       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:14:59.566145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:00.674827       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:00.695214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:03.512931       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:03.513982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:13.141771       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:13.142857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:18.953306       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:18.954401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:28.610039       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:28.611195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:48.965187       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:48.966223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:51.748162       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:51.749209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:15:59.932452       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:15:59.933557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:16:36.387540       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:16:36.388708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:16:47.906819       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:16:47.907863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0907 00:16:49.121783       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0907 00:16:49.122772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [fd707258a546a6e6579cd1cf3ccaee3557461ba418f986015a836294073d5960] <==
	I0907 00:11:00.589050       1 server_linux.go:53] "Using iptables proxy"
	I0907 00:11:00.841922       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0907 00:11:00.942715       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0907 00:11:00.947178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0907 00:11:00.947354       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0907 00:11:01.556862       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0907 00:11:01.557009       1 server_linux.go:132] "Using iptables Proxier"
	I0907 00:11:01.562646       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0907 00:11:01.563341       1 server.go:527] "Version info" version="v1.34.0"
	I0907 00:11:01.563424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:11:01.565222       1 config.go:106] "Starting endpoint slice config controller"
	I0907 00:11:01.566096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0907 00:11:01.565304       1 config.go:403] "Starting serviceCIDR config controller"
	I0907 00:11:01.566192       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0907 00:11:01.566070       1 config.go:309] "Starting node config controller"
	I0907 00:11:01.566268       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0907 00:11:01.566296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0907 00:11:01.566828       1 config.go:200] "Starting service config controller"
	I0907 00:11:01.566886       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0907 00:11:01.668964       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0907 00:11:01.669038       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0907 00:11:01.669853       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a7a41f681e19a28478f1d50899642d07552fc37335b0c357058b3cd5eccf5df2] <==
	I0907 00:10:47.485217       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:10:47.487627       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0907 00:10:47.487730       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:10:47.487752       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:10:47.487767       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0907 00:10:47.501860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0907 00:10:47.501944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0907 00:10:47.502009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0907 00:10:47.502094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0907 00:10:47.502105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0907 00:10:47.502168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0907 00:10:47.502203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0907 00:10:47.502253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0907 00:10:47.502305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0907 00:10:47.502351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0907 00:10:47.502394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0907 00:10:47.502397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0907 00:10:47.502446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0907 00:10:47.502493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0907 00:10:47.502560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0907 00:10:47.502568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0907 00:10:47.502617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0907 00:10:47.502714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0907 00:10:47.502727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0907 00:10:49.088161       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 07 00:16:19 addons-055380 kubelet[1537]: E0907 00:16:19.829373    1537 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757204179829075370 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:29 addons-055380 kubelet[1537]: E0907 00:16:29.831597    1537 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757204189831350458 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:29 addons-055380 kubelet[1537]: E0907 00:16:29.831636    1537 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757204189831350458 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:33 addons-055380 kubelet[1537]: E0907 00:16:33.428159    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b5d9a8029cbb1910c8a149c722382d2261426700247c86598e8195a70e9936ad/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b5d9a8029cbb1910c8a149c722382d2261426700247c86598e8195a70e9936ad/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.362997    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/61313678e6255b07c402fb4e40692907246b84803e617936902fae9aa92d5f09/diff" to get inode usage: stat /var/lib/containers/storage/overlay/61313678e6255b07c402fb4e40692907246b84803e617936902fae9aa92d5f09/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.373162    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b07443dca07a261a6e86c89282b430659f30cb2c511a43dd32fd0a021497c011/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b07443dca07a261a6e86c89282b430659f30cb2c511a43dd32fd0a021497c011/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.834516    1537 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757204199834224639 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.834554    1537 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757204199834224639 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.843844    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9f9b2caf4b1a07d36b9ba4cd1423d03a71d86906265c730909564ebb08ec6049/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9f9b2caf4b1a07d36b9ba4cd1423d03a71d86906265c730909564ebb08ec6049/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.956754    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c169d74ec2e17ff3aa5473d7b154aae8632785c39f6644a3fe5b86f1cf692488/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c169d74ec2e17ff3aa5473d7b154aae8632785c39f6644a3fe5b86f1cf692488/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:39 addons-055380 kubelet[1537]: E0907 00:16:39.958912    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be4ee32af9e8b27d666b2c01ae81dc23170a2d39b30c9cd78cc2587f3e1ef9d6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be4ee32af9e8b27d666b2c01ae81dc23170a2d39b30c9cd78cc2587f3e1ef9d6/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.657680    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b07443dca07a261a6e86c89282b430659f30cb2c511a43dd32fd0a021497c011/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b07443dca07a261a6e86c89282b430659f30cb2c511a43dd32fd0a021497c011/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.658481    1537 manager.go:1116] Failed to create existing container: /docker/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2/crio-12e9e88237179771bf21ff9440949db1ea3b64c650382d9314e775158d6529bb: Error finding container 12e9e88237179771bf21ff9440949db1ea3b64c650382d9314e775158d6529bb: Status 404 returned error can't find the container with id 12e9e88237179771bf21ff9440949db1ea3b64c650382d9314e775158d6529bb
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.658704    1537 manager.go:1116] Failed to create existing container: /crio-1c6af95eb6f5dd51cd89869e17ef11db4ed976f53c8a30ca0df8ffc8ed93cdce: Error finding container 1c6af95eb6f5dd51cd89869e17ef11db4ed976f53c8a30ca0df8ffc8ed93cdce: Status 404 returned error can't find the container with id 1c6af95eb6f5dd51cd89869e17ef11db4ed976f53c8a30ca0df8ffc8ed93cdce
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.660102    1537 manager.go:1116] Failed to create existing container: /crio-12e9e88237179771bf21ff9440949db1ea3b64c650382d9314e775158d6529bb: Error finding container 12e9e88237179771bf21ff9440949db1ea3b64c650382d9314e775158d6529bb: Status 404 returned error can't find the container with id 12e9e88237179771bf21ff9440949db1ea3b64c650382d9314e775158d6529bb
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.691049    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/61313678e6255b07c402fb4e40692907246b84803e617936902fae9aa92d5f09/diff" to get inode usage: stat /var/lib/containers/storage/overlay/61313678e6255b07c402fb4e40692907246b84803e617936902fae9aa92d5f09/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.694626    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/317d014308118bbb9d306323a84d3134d77a7f0a4fe59dbe1ac1623b6ece1ec8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/317d014308118bbb9d306323a84d3134d77a7f0a4fe59dbe1ac1623b6ece1ec8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.837095    1537 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757204209836770484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:49 addons-055380 kubelet[1537]: E0907 00:16:49.837135    1537 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757204209836770484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:51 addons-055380 kubelet[1537]: E0907 00:16:51.712274    1537 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6c89c9155bf5a3727f27eaeb834915181180fcd349774be0083fc9c1b6926a48/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6c89c9155bf5a3727f27eaeb834915181180fcd349774be0083fc9c1b6926a48/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:16:58 addons-055380 kubelet[1537]: I0907 00:16:58.526528    1537 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 07 00:16:59 addons-055380 kubelet[1537]: E0907 00:16:59.839719    1537 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757204219839439969 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:16:59 addons-055380 kubelet[1537]: E0907 00:16:59.839760    1537 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757204219839439969 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 07 00:17:00 addons-055380 kubelet[1537]: I0907 00:17:00.714228    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zmf\" (UniqueName: \"kubernetes.io/projected/7112b204-0d28-46e4-82a1-9e829c367655-kube-api-access-94zmf\") pod \"hello-world-app-5d498dc89-2vhn5\" (UID: \"7112b204-0d28-46e4-82a1-9e829c367655\") " pod="default/hello-world-app-5d498dc89-2vhn5"
	Sep 07 00:17:00 addons-055380 kubelet[1537]: W0907 00:17:00.966221    1537 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/92432128cb4e9191a8727bbe91ebb44a5ef14a410162d8bca9f99b1552835ca2/crio-ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b WatchSource:0}: Error finding container ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b: Status 404 returned error can't find the container with id ecf66dc08df97901cf8de6f7b09431da1c930b6888373cd248aab9dcfabd480b
	
	
	==> storage-provisioner [be7d4447d0187020ed1e39b252a4594fcda5b702676f1913eb1ce55329c2264d] <==
	W0907 00:16:38.159976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:40.163127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:40.168264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:42.171807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:42.179691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:44.183333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:44.188181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:46.191172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:46.199659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:48.202619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:48.207216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:50.209895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:50.216740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:52.220409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:52.224922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:54.228187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:54.234521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:56.237458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:56.241826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:58.245469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:16:58.252342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:17:00.270466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:17:00.286925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:17:02.291678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:17:02.301319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-055380 -n addons-055380
helpers_test.go:269: (dbg) Run:  kubectl --context addons-055380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7zjzb ingress-nginx-admission-patch-t2gk8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-055380 describe pod ingress-nginx-admission-create-7zjzb ingress-nginx-admission-patch-t2gk8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-055380 describe pod ingress-nginx-admission-create-7zjzb ingress-nginx-admission-patch-t2gk8: exit status 1 (90.660102ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7zjzb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t2gk8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-055380 describe pod ingress-nginx-admission-create-7zjzb ingress-nginx-admission-patch-t2gk8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 addons disable ingress-dns --alsologtostderr -v=1: (1.52751917s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 addons disable ingress --alsologtostderr -v=1: (7.80977682s)
--- FAIL: TestAddons/parallel/Ingress (155.47s)
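The describe step above returns NotFound because the two ingress-nginx admission pods listed as non-running were presumably already gone by the time the post-mortem ran, leaving only the controller to inspect. A minimal follow-up sketch, assuming the addons-055380 profile is still up (the label selector is the standard ingress-nginx controller label and the --tail value is arbitrary):

	kubectl --context addons-055380 -n ingress-nginx get pods,svc,endpoints -o wide
	kubectl --context addons-055380 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100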

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-258398 --alsologtostderr -v=1]
E0907 00:32:59.743616  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:34:22.819854  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-258398 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-258398 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-258398 --alsologtostderr -v=1] stderr:
I0907 00:31:31.213336  328207 out.go:360] Setting OutFile to fd 1 ...
I0907 00:31:31.214124  328207 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:31:31.214149  328207 out.go:374] Setting ErrFile to fd 2...
I0907 00:31:31.214174  328207 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:31:31.214573  328207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
I0907 00:31:31.214941  328207 mustload.go:65] Loading cluster: functional-258398
I0907 00:31:31.215657  328207 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:31:31.216937  328207 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:31:31.236322  328207 host.go:66] Checking if "functional-258398" exists ...
I0907 00:31:31.236636  328207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0907 00:31:31.294278  328207 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 00:31:31.285071729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0907 00:31:31.294394  328207 api_server.go:166] Checking apiserver status ...
I0907 00:31:31.294463  328207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0907 00:31:31.294499  328207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:31:31.312290  328207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:31:31.407852  328207 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4632/cgroup
I0907 00:31:31.417162  328207 api_server.go:182] apiserver freezer: "2:freezer:/docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio/crio-c38cff0c3281b8444ca0b2dfd923df7bb0db6a01d9ae78c16747f8aebcb0febe"
I0907 00:31:31.417241  328207 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio/crio-c38cff0c3281b8444ca0b2dfd923df7bb0db6a01d9ae78c16747f8aebcb0febe/freezer.state
I0907 00:31:31.426363  328207 api_server.go:204] freezer state: "THAWED"
I0907 00:31:31.426391  328207 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0907 00:31:31.434677  328207 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0907 00:31:31.434724  328207 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0907 00:31:31.434903  328207 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:31:31.434928  328207 addons.go:69] Setting dashboard=true in profile "functional-258398"
I0907 00:31:31.434947  328207 addons.go:238] Setting addon dashboard=true in "functional-258398"
I0907 00:31:31.434973  328207 host.go:66] Checking if "functional-258398" exists ...
I0907 00:31:31.435385  328207 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:31:31.455151  328207 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0907 00:31:31.457902  328207 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0907 00:31:31.460746  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0907 00:31:31.460767  328207 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0907 00:31:31.460905  328207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:31:31.478004  328207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:31:31.580046  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0907 00:31:31.580072  328207 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0907 00:31:31.598985  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0907 00:31:31.599012  328207 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0907 00:31:31.618338  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0907 00:31:31.618361  328207 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0907 00:31:31.640897  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0907 00:31:31.640918  328207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0907 00:31:31.659708  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0907 00:31:31.659759  328207 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0907 00:31:31.678842  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0907 00:31:31.678891  328207 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0907 00:31:31.698029  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0907 00:31:31.698053  328207 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0907 00:31:31.717456  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0907 00:31:31.717481  328207 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0907 00:31:31.737020  328207 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0907 00:31:31.737045  328207 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0907 00:31:31.757955  328207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0907 00:31:32.514488  328207 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-258398 addons enable metrics-server

                                                
                                                
I0907 00:31:32.517336  328207 addons.go:201] Writing out "functional-258398" config to set dashboard=true...
W0907 00:31:32.517627  328207 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0907 00:31:32.518342  328207 kapi.go:59] client config for functional-258398: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt", KeyFile:"/home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.key", CAFile:"/home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2d7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0907 00:31:32.518892  328207 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0907 00:31:32.518912  328207 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0907 00:31:32.518918  328207 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0907 00:31:32.518926  328207 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0907 00:31:32.518931  328207 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0907 00:31:32.534371  328207 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  857b725c-e935-4c57-9bb3-008721bf2350 1514 0 2025-09-07 00:31:32 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-07 00:31:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.201.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.201.10],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0907 00:31:32.534530  328207 out.go:285] * Launching proxy ...
* Launching proxy ...
I0907 00:31:32.534608  328207 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-258398 proxy --port 36195]
I0907 00:31:32.534939  328207 dashboard.go:157] Waiting for kubectl to output host:port ...
I0907 00:31:32.614275  328207 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0907 00:31:32.614362  328207 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0907 00:31:32.630812  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6621053-e28b-4522-a5de-dc68d8a53ae9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008303c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054e000 TLS:<nil>}
I0907 00:31:32.630893  328207 retry.go:31] will retry after 100.794µs: Temporary Error: unexpected response code: 503
I0907 00:31:32.634818  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e638f0f-63d5-497d-8636-5c63bd319221] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428140 TLS:<nil>}
I0907 00:31:32.634907  328207 retry.go:31] will retry after 120.4µs: Temporary Error: unexpected response code: 503
I0907 00:31:32.639019  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7d4c95f-3466-4e41-8087-deeebd2bfb01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054e3c0 TLS:<nil>}
I0907 00:31:32.639093  328207 retry.go:31] will retry after 329.644µs: Temporary Error: unexpected response code: 503
I0907 00:31:32.642974  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[867aa1e9-ad7e-45bf-a034-bc4fd6250f20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054e500 TLS:<nil>}
I0907 00:31:32.643052  328207 retry.go:31] will retry after 341.946µs: Temporary Error: unexpected response code: 503
I0907 00:31:32.647237  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa792f42-25d4-4057-abcb-d189ea9e03b2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008306c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054e640 TLS:<nil>}
I0907 00:31:32.647302  328207 retry.go:31] will retry after 262.391µs: Temporary Error: unexpected response code: 503
I0907 00:31:32.651240  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[207d6d5a-4580-4ebf-a986-74f963a10935] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008307c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054e780 TLS:<nil>}
I0907 00:31:32.651320  328207 retry.go:31] will retry after 396.224µs: Temporary Error: unexpected response code: 503
I0907 00:31:32.655266  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd48de8a-982e-40cc-9a32-08c76a2f1e4c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008308c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054eb40 TLS:<nil>}
I0907 00:31:32.655400  328207 retry.go:31] will retry after 1.62291ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.660510  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a32b0c0-13ca-4d2d-8f62-0cc3214bb829] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008f4240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428280 TLS:<nil>}
I0907 00:31:32.660584  328207 retry.go:31] will retry after 1.18162ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.665619  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[99a138ab-0080-4ef2-94e0-1e1d7e83ddd2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008f4340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004283c0 TLS:<nil>}
I0907 00:31:32.665680  328207 retry.go:31] will retry after 1.609559ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.670863  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da158955-80ce-4168-a12a-4173a64c529c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054ec80 TLS:<nil>}
I0907 00:31:32.670927  328207 retry.go:31] will retry after 2.057753ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.675760  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eaa1095e-b35d-4fea-befb-95f47fd26482] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054edc0 TLS:<nil>}
I0907 00:31:32.675821  328207 retry.go:31] will retry after 6.147772ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.685968  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6b62e135-1984-4117-87c5-15759becd489] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x40008f4580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054ef00 TLS:<nil>}
I0907 00:31:32.686041  328207 retry.go:31] will retry after 5.502505ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.699814  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fcba1dd-672f-4208-8121-ed9cf28eb8ae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428780 TLS:<nil>}
I0907 00:31:32.699896  328207 retry.go:31] will retry after 11.261975ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.715724  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f0d6d19-522d-4cfe-9ff6-52b1dcba8312] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f040 TLS:<nil>}
I0907 00:31:32.715792  328207 retry.go:31] will retry after 27.506816ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.747088  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06e35c7e-8b76-4f89-a879-78fae6fa2f58] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f180 TLS:<nil>}
I0907 00:31:32.747149  328207 retry.go:31] will retry after 16.573255ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.767672  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[182503ac-bc27-4e5d-940f-2d2619472416] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f2c0 TLS:<nil>}
I0907 00:31:32.767743  328207 retry.go:31] will retry after 53.509365ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.824985  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f63071ea-194b-4645-8bb3-33c55700cc06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000830f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f400 TLS:<nil>}
I0907 00:31:32.825051  328207 retry.go:31] will retry after 66.422802ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.895356  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ae9a57b-8c31-4e28-aba4-3b3a2242b427] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000831140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f540 TLS:<nil>}
I0907 00:31:32.895431  328207 retry.go:31] will retry after 80.864846ms: Temporary Error: unexpected response code: 503
I0907 00:31:32.979599  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f657b835-de81-46be-b180-779b329436d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:32 GMT]] Body:0x4000831900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f680 TLS:<nil>}
I0907 00:31:32.979667  328207 retry.go:31] will retry after 212.804506ms: Temporary Error: unexpected response code: 503
I0907 00:31:33.196157  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a34a3a03-8e90-46fd-bc80-dd4a7300d1d5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:33 GMT]] Body:0x4000831980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f7c0 TLS:<nil>}
I0907 00:31:33.196231  328207 retry.go:31] will retry after 329.541243ms: Temporary Error: unexpected response code: 503
I0907 00:31:33.529716  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f704e30b-aacc-42c1-ab73-0384468e8304] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:33 GMT]] Body:0x40008f4a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004288c0 TLS:<nil>}
I0907 00:31:33.529784  328207 retry.go:31] will retry after 289.021308ms: Temporary Error: unexpected response code: 503
I0907 00:31:33.822002  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a559a47c-e430-4827-a0b8-d59a009e8297] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:33 GMT]] Body:0x4000831ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428a00 TLS:<nil>}
I0907 00:31:33.822072  328207 retry.go:31] will retry after 549.592707ms: Temporary Error: unexpected response code: 503
I0907 00:31:34.375763  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c10e96f-6d02-44ff-97b4-84072c791fb2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:34 GMT]] Body:0x40008f4b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054f900 TLS:<nil>}
I0907 00:31:34.375832  328207 retry.go:31] will retry after 580.225577ms: Temporary Error: unexpected response code: 503
I0907 00:31:34.959422  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d7c1e82-7623-4466-b361-432ed1a2c575] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:34 GMT]] Body:0x40008f4dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054fa40 TLS:<nil>}
I0907 00:31:34.959498  328207 retry.go:31] will retry after 667.254342ms: Temporary Error: unexpected response code: 503
I0907 00:31:35.630753  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[afabd9cc-3a22-43db-ac7e-31d466600b0b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:35 GMT]] Body:0x40008f4e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054fb80 TLS:<nil>}
I0907 00:31:35.630821  328207 retry.go:31] will retry after 960.165275ms: Temporary Error: unexpected response code: 503
I0907 00:31:36.594593  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d4ffbd6c-2ce9-404b-aca3-6e039e197182] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:36 GMT]] Body:0x40008f5000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400054fe00 TLS:<nil>}
I0907 00:31:36.594652  328207 retry.go:31] will retry after 2.217662135s: Temporary Error: unexpected response code: 503
I0907 00:31:38.816029  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef45d429-5673-4926-b155-f80f22ab653c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:38 GMT]] Body:0x40008f5100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400166a000 TLS:<nil>}
I0907 00:31:38.816096  328207 retry.go:31] will retry after 5.501653336s: Temporary Error: unexpected response code: 503
I0907 00:31:44.321726  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d09a5637-8198-4f8f-b60a-e3831e612963] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:44 GMT]] Body:0x40008f5240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428b40 TLS:<nil>}
I0907 00:31:44.321790  328207 retry.go:31] will retry after 4.912831015s: Temporary Error: unexpected response code: 503
I0907 00:31:49.239508  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bac672da-2044-4e05-853b-30214deb7208] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:49 GMT]] Body:0x40016a0000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428dc0 TLS:<nil>}
I0907 00:31:49.239572  328207 retry.go:31] will retry after 8.752479354s: Temporary Error: unexpected response code: 503
I0907 00:31:57.995357  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8d83ef4d-6f82-42fa-a4cf-21725a5b08a6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:31:57 GMT]] Body:0x40016a00c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400166a140 TLS:<nil>}
I0907 00:31:57.995421  328207 retry.go:31] will retry after 12.502592223s: Temporary Error: unexpected response code: 503
I0907 00:32:10.504210  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ab14b7e7-d26d-473b-a11e-9905ad1fe3e1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:32:10 GMT]] Body:0x40016a0180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428f00 TLS:<nil>}
I0907 00:32:10.504275  328207 retry.go:31] will retry after 10.05186579s: Temporary Error: unexpected response code: 503
I0907 00:32:20.562952  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[83efbd45-5e3e-4959-91d9-4f16bdf3dcd8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:32:20 GMT]] Body:0x40008f5500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400166a280 TLS:<nil>}
I0907 00:32:20.563016  328207 retry.go:31] will retry after 25.705989422s: Temporary Error: unexpected response code: 503
I0907 00:32:46.272860  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[32afc04b-31c2-414f-8005-ca4521d75e75] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:32:46 GMT]] Body:0x40016a0280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000429040 TLS:<nil>}
I0907 00:32:46.272925  328207 retry.go:31] will retry after 53.419106503s: Temporary Error: unexpected response code: 503
I0907 00:33:39.697607  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8d0acf23-6176-4c5d-91d7-abb12619210b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:33:39 GMT]] Body:0x40008f4140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400166a3c0 TLS:<nil>}
I0907 00:33:39.697678  328207 retry.go:31] will retry after 1m28.355023724s: Temporary Error: unexpected response code: 503
I0907 00:35:08.055975  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[32f2fb37-31b4-4ee3-b108-062f38204f1d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:35:08 GMT]] Body:0x40016a0100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004292c0 TLS:<nil>}
I0907 00:35:08.056044  328207 retry.go:31] will retry after 31.157647686s: Temporary Error: unexpected response code: 503
I0907 00:35:39.216777  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3e3054b-b23c-42fa-b894-1e5ac68e4fcc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:35:39 GMT]] Body:0x40008f4140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400166a500 TLS:<nil>}
I0907 00:35:39.216868  328207 retry.go:31] will retry after 37.551805262s: Temporary Error: unexpected response code: 503
I0907 00:36:16.773623  328207 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20e21fd2-a184-4911-805f-85d5e4a6cf90] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Sep 2025 00:36:16 GMT]] Body:0x40016a0140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000428000 TLS:<nil>}
I0907 00:36:16.773685  328207 retry.go:31] will retry after 50.936788449s: Temporary Error: unexpected response code: 503
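Every probe in the retry loop above received a 503 from the apiserver's service proxy, which typically means the kubernetes-dashboard Service had no ready endpoints for the entire retry window. One way to confirm that, assuming the functional-258398 profile is still running (the k8s-app=kubernetes-dashboard selector matches the Service spec logged above):

	kubectl --context functional-258398 -n kubernetes-dashboard get deploy,pods,endpoints
	kubectl --context functional-258398 -n kubernetes-dashboard describe pod -l k8s-app=kubernetes-dashboard
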
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-258398
helpers_test.go:243: (dbg) docker inspect functional-258398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515",
	        "Created": "2025-09-07T00:18:23.454660871Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-07T00:18:23.512135033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/hosts",
	        "LogPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515-json.log",
	        "Name": "/functional-258398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-258398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-258398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515",
	                "LowerDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8-init/diff:/var/lib/docker/overlay2/5a4b8b8cbe09f4c7d8197d949f1b03b5a8d427ad9c5a27d0359fd04ab981afab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-258398",
	                "Source": "/var/lib/docker/volumes/functional-258398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-258398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-258398",
	                "name.minikube.sigs.k8s.io": "functional-258398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70d1c9b2d70f562f4d6c7d385513bf5d0eaaacc02e705345847c2caf08d1de2c",
	            "SandboxKey": "/var/run/docker/netns/70d1c9b2d70f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-258398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:fb:08:fa:de:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec956df0ad56bd722eaaaf7f53b6bca29823820d69caf85d31f449a0641cba5a",
	                    "EndpointID": "6323a4eda35746f28e310761f04758eede91222887d1875dbe5ab9375cae0ecb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-258398",
	                        "5b933cc290a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
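The JSON above is the docker-inspect view of the functional-258398 node container; its NetworkSettings.Ports map records which localhost port each container port (22, 2376, 5000, 8441, 32443) is published on, which is usually the first thing needed when triaging connection failures like this one. As a minimal sketch for reading a single mapping without scanning the whole dump (assuming the container still exists on the CI host; not part of the test harness):

    # Print the host port that 8441/tcp (the apiserver) is published on; for this run it is 33151.
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-258398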
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-258398 -n functional-258398
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 logs -n 25: (1.793946632s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-258398 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount     │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdany-port1538918264/001:/mount-9p --alsologtostderr -v=1                   │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh       │ functional-258398 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh -- ls -la /mount-9p                                                                                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh cat /mount-9p/test-1757205078345806115                                                                      │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh stat /mount-9p/created-by-test                                                                              │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh stat /mount-9p/created-by-pod                                                                               │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh sudo umount -f /mount-9p                                                                                    │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ mount     │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdspecific-port3352215716/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh       │ functional-258398 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh       │ functional-258398 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh -- ls -la /mount-9p                                                                                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh sudo umount -f /mount-9p                                                                                    │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount     │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount1 --alsologtostderr -v=1                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount     │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount2 --alsologtostderr -v=1                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount     │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount3 --alsologtostderr -v=1                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh       │ functional-258398 ssh findmnt -T /mount1                                                                                          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh       │ functional-258398 ssh findmnt -T /mount1                                                                                          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh findmnt -T /mount2                                                                                          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh       │ functional-258398 ssh findmnt -T /mount3                                                                                          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ mount     │ -p functional-258398 --kill=true                                                                                                  │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ start     │ -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ start     │ -p functional-258398 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ start     │ -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-258398 --alsologtostderr -v=1                                                                    │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:31:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:31:30.995730  328161 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:31:30.995898  328161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.995911  328161 out.go:374] Setting ErrFile to fd 2...
	I0907 00:31:30.995918  328161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.996294  328161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:31:30.996655  328161 out.go:368] Setting JSON to false
	I0907 00:31:30.997600  328161 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8040,"bootTime":1757197051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:31:30.997669  328161 start.go:140] virtualization:  
	I0907 00:31:31.000901  328161 out.go:179] * [functional-258398] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 00:31:31.017952  328161 notify.go:220] Checking for updates...
	I0907 00:31:31.018065  328161 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:31:31.021171  328161 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:31:31.024111  328161 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:31:31.027021  328161 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:31:31.029893  328161 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 00:31:31.032938  328161 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:31:31.036462  328161 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:31:31.037142  328161 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:31:31.071178  328161 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:31:31.071301  328161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:31:31.138155  328161 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 00:31:31.128009597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:31:31.138267  328161 docker.go:318] overlay module found
	I0907 00:31:31.141470  328161 out.go:179] * Using the docker driver based on existing profile
	I0907 00:31:31.144302  328161 start.go:304] selected driver: docker
	I0907 00:31:31.144322  328161 start.go:918] validating driver "docker" against &{Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:31:31.144432  328161 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:31:31.147897  328161 out.go:203] 
	W0907 00:31:31.150946  328161 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0907 00:31:31.153781  328161 out.go:203] 
	
	
	==> CRI-O <==
	Sep 07 00:35:17 functional-258398 crio[4167]: time="2025-09-07 00:35:17.424339008Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5b645802-8c96-4bd0-bbb0-88fd55a849d9 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:18 functional-258398 crio[4167]: time="2025-09-07 00:35:18.426208919Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7b16d073-98aa-496a-9c2e-0a12417798e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:18 functional-258398 crio[4167]: time="2025-09-07 00:35:18.426448428Z" level=info msg="Image docker.io/nginx:alpine not found" id=7b16d073-98aa-496a-9c2e-0a12417798e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:29 functional-258398 crio[4167]: time="2025-09-07 00:35:29.424377432Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=859164b5-6820-4b4b-a213-4f5b5589d4ee name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:29 functional-258398 crio[4167]: time="2025-09-07 00:35:29.424648884Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=859164b5-6820-4b4b-a213-4f5b5589d4ee name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:33 functional-258398 crio[4167]: time="2025-09-07 00:35:33.423993614Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=23ca3afb-412c-46e0-abb2-ecaaedaa8823 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:33 functional-258398 crio[4167]: time="2025-09-07 00:35:33.424220946Z" level=info msg="Image docker.io/nginx:alpine not found" id=23ca3afb-412c-46e0-abb2-ecaaedaa8823 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:44 functional-258398 crio[4167]: time="2025-09-07 00:35:44.424926337Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1eb91f89-b3a6-46de-a969-78a9163a3342 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:44 functional-258398 crio[4167]: time="2025-09-07 00:35:44.425202515Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1eb91f89-b3a6-46de-a969-78a9163a3342 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:48 functional-258398 crio[4167]: time="2025-09-07 00:35:48.424853638Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8aaa0d90-9c67-4003-acfd-1bd2db3bf45e name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:48 functional-258398 crio[4167]: time="2025-09-07 00:35:48.425153546Z" level=info msg="Image docker.io/nginx:alpine not found" id=8aaa0d90-9c67-4003-acfd-1bd2db3bf45e name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:48 functional-258398 crio[4167]: time="2025-09-07 00:35:48.425923537Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=14302962-8201-49ee-a4a5-87a3db5880e8 name=/runtime.v1.ImageService/PullImage
	Sep 07 00:35:48 functional-258398 crio[4167]: time="2025-09-07 00:35:48.428847742Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 07 00:35:49 functional-258398 crio[4167]: time="2025-09-07 00:35:49.424930549Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e13cbac1-cb41-4d67-992a-478656737e0b name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:49 functional-258398 crio[4167]: time="2025-09-07 00:35:49.425227635Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=e13cbac1-cb41-4d67-992a-478656737e0b name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:56 functional-258398 crio[4167]: time="2025-09-07 00:35:56.424248612Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0d001554-66a2-471f-a46d-9579a28058c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:35:56 functional-258398 crio[4167]: time="2025-09-07 00:35:56.424515379Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0d001554-66a2-471f-a46d-9579a28058c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:36:02 functional-258398 crio[4167]: time="2025-09-07 00:36:02.424376087Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=c273ff6d-7172-433e-b0e1-b9ee8453b167 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:36:02 functional-258398 crio[4167]: time="2025-09-07 00:36:02.424653488Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=c273ff6d-7172-433e-b0e1-b9ee8453b167 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:36:17 functional-258398 crio[4167]: time="2025-09-07 00:36:17.424557028Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=5b49525e-6670-4e16-ae89-6e51e027f51c name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:36:17 functional-258398 crio[4167]: time="2025-09-07 00:36:17.424972908Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=5b49525e-6670-4e16-ae89-6e51e027f51c name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:36:18 functional-258398 crio[4167]: time="2025-09-07 00:36:18.721159776Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=60b625de-ee0f-42d0-8026-1922ad5a4f9a name=/runtime.v1.ImageService/PullImage
	Sep 07 00:36:18 functional-258398 crio[4167]: time="2025-09-07 00:36:18.722340986Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 07 00:36:30 functional-258398 crio[4167]: time="2025-09-07 00:36:30.424756900Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c248a730-c82e-477a-86ac-67066e1a03bb name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:36:30 functional-258398 crio[4167]: time="2025-09-07 00:36:30.425014018Z" level=info msg="Image docker.io/nginx:alpine not found" id=c248a730-c82e-477a-86ac-67066e1a03bb name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b423e4487971       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   579c1baf79929       busybox-mount
	5a0731b973a06       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      15 minutes ago      Running             kindnet-cni               2                   6327ce791bce0       kindnet-rdhlv
	c4bf1f0f08774       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      15 minutes ago      Running             kube-proxy                2                   c656aaca1c898       kube-proxy-7m6lc
	4886b47b59b8e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 minutes ago      Running             coredns                   2                   d2330709f98e3       coredns-66bc5c9577-zq2c7
	111784a401976       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 minutes ago      Running             storage-provisioner       2                   fc9740565f3c3       storage-provisioner
	c38cff0c3281b       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                      15 minutes ago      Running             kube-apiserver            0                   03737861d8eb0       kube-apiserver-functional-258398
	26716712168ad       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      15 minutes ago      Running             kube-scheduler            2                   c82b517fa6bed       kube-scheduler-functional-258398
	0710bd2c664cc       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      15 minutes ago      Running             kube-controller-manager   2                   78597cbaec082       kube-controller-manager-functional-258398
	3812c18eb6f1e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      15 minutes ago      Running             etcd                      2                   86836b9694482       etcd-functional-258398
	997b2d26cc857       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      16 minutes ago      Exited              etcd                      1                   86836b9694482       etcd-functional-258398
	37a1efe2c040b       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      16 minutes ago      Exited              kube-proxy                1                   c656aaca1c898       kube-proxy-7m6lc
	bfeeacf134e42       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      16 minutes ago      Exited              kube-controller-manager   1                   78597cbaec082       kube-controller-manager-functional-258398
	b443422dc44b2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      16 minutes ago      Exited              kindnet-cni               1                   6327ce791bce0       kindnet-rdhlv
	b465a21ac08f6       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      16 minutes ago      Exited              kube-scheduler            1                   c82b517fa6bed       kube-scheduler-functional-258398
	e583e4d6e6fa3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      16 minutes ago      Exited              storage-provisioner       1                   fc9740565f3c3       storage-provisioner
	d6af83dec4758       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      16 minutes ago      Exited              coredns                   1                   d2330709f98e3       coredns-66bc5c9577-zq2c7
	
	
	==> coredns [4886b47b59b8e7541c3771076dbc12aea68d014d9055c0dee94fa923fda17af6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59688 - 41583 "HINFO IN 3897342971655153967.1326956596793676297. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.070203789s
	
	
	==> coredns [d6af83dec4758df3d40a670815100c1ff162fda2953d0e2c204b8645a7a471b3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51420 - 34339 "HINFO IN 6664165119936110056.6222209699285952715. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016530267s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-258398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-258398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d
	                    minikube.k8s.io/name=functional-258398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_07T00_18_49_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Sep 2025 00:18:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-258398
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Sep 2025 00:36:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Sep 2025 00:36:12 +0000   Sun, 07 Sep 2025 00:18:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Sep 2025 00:36:12 +0000   Sun, 07 Sep 2025 00:18:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Sep 2025 00:36:12 +0000   Sun, 07 Sep 2025 00:18:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Sep 2025 00:36:12 +0000   Sun, 07 Sep 2025 00:19:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-258398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e30938cabb64a8891f32345a05d15d3
	  System UUID:                a26dd3f2-2063-4dec-b1a6-8a59580be599
	  Boot ID:                    beae285a-afb1-41fb-a1c4-2915721f6659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-p29gr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-rsgjm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 coredns-66bc5c9577-zq2c7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-functional-258398                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-rdhlv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-258398              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-258398     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-7m6lc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-258398              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2gcdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-chzl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x9 over 17m)  kubelet          Node functional-258398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-258398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-258398 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node functional-258398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node functional-258398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node functional-258398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node functional-258398 event: Registered Node functional-258398 in Controller
	  Normal   NodeReady                16m                kubelet          Node functional-258398 status is now: NodeReady
	  Normal   RegisteredNode           16m                node-controller  Node functional-258398 event: Registered Node functional-258398 in Controller
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-258398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-258398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)  kubelet          Node functional-258398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node functional-258398 event: Registered Node functional-258398 in Controller
	
	
	==> dmesg <==
	[Sep 6 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013704] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510843] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033312] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.768135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.749154] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 6 23:22] hrtimer: interrupt took 27192686 ns
	[Sep 7 00:09] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [3812c18eb6f1e4dbbd6fc70678a89a0fe64fecd4886f68a9e65ab3f7f1b1e4b1] <==
	{"level":"warn","ts":"2025-09-07T00:20:35.361033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.377795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.395225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.414387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.452264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.470221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.492390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.508417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.529641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.545299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.568147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.582005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.600775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.610218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.634345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.685361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.697569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.744857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.848907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40804","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-07T00:30:34.290274Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1033}
	{"level":"info","ts":"2025-09-07T00:30:34.313825Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1033,"took":"23.260813ms","hash":274688013,"current-db-size-bytes":3088384,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1302528,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-07T00:30:34.313877Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":274688013,"revision":1033,"compact-revision":-1}
	{"level":"info","ts":"2025-09-07T00:35:34.301218Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1361}
	{"level":"info","ts":"2025-09-07T00:35:34.305612Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1361,"took":"3.687902ms","hash":3927811924,"current-db-size-bytes":3088384,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":2260992,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-07T00:35:34.305666Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3927811924,"revision":1361,"compact-revision":1033}
	
	
	==> etcd [997b2d26cc8578cb93e90c9a02727c48df63d896c893ff7a9cf8f5328848caaa] <==
	{"level":"warn","ts":"2025-09-07T00:19:50.878601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:50.904040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:50.935548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:50.981661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:51.016879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:51.061299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:51.134333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-07T00:20:16.768639Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-07T00:20:16.768759Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-258398","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-07T00:20:16.769628Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-07T00:20:16.769785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-07T00:20:16.906230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906302Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906396Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906424Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-07T00:20:16.906433Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906397Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-07T00:20:16.906456Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-07T00:20:16.906415Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-07T00:20:16.906559Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-07T00:20:16.906571Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-07T00:20:16.910796Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-07T00:20:16.910886Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-07T00:20:16.910918Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-07T00:20:16.910925Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-258398","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 00:36:32 up  2:19,  0 users,  load average: 0.41, 0.37, 1.05
	Linux functional-258398 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5a0731b973a0698a6905641675dbe03ce2d15ce6065f8ef93792af6101604349] <==
	I0907 00:34:28.144329       1 main.go:301] handling current node
	I0907 00:34:38.144930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:34:38.144964       1 main.go:301] handling current node
	I0907 00:34:48.141461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:34:48.141498       1 main.go:301] handling current node
	I0907 00:34:58.148430       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:34:58.148462       1 main.go:301] handling current node
	I0907 00:35:08.141440       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:35:08.141496       1 main.go:301] handling current node
	I0907 00:35:18.142512       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:35:18.142550       1 main.go:301] handling current node
	I0907 00:35:28.144443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:35:28.144571       1 main.go:301] handling current node
	I0907 00:35:38.143618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:35:38.143940       1 main.go:301] handling current node
	I0907 00:35:48.142140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:35:48.142173       1 main.go:301] handling current node
	I0907 00:35:58.146311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:35:58.146347       1 main.go:301] handling current node
	I0907 00:36:08.142511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:36:08.142547       1 main.go:301] handling current node
	I0907 00:36:18.144890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:36:18.145017       1 main.go:301] handling current node
	I0907 00:36:28.144979       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:36:28.145017       1 main.go:301] handling current node
	
	
	==> kindnet [b443422dc44b238ac8bf8aebfd292ce8217f9df9c0a84c02e9aa8b2c94165428] <==
	I0907 00:19:47.940846       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0907 00:19:47.949081       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0907 00:19:47.949235       1 main.go:148] setting mtu 1500 for CNI 
	I0907 00:19:47.949255       1 main.go:178] kindnetd IP family: "ipv4"
	I0907 00:19:47.949269       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-07T00:19:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0907 00:19:48.173097       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0907 00:19:48.173134       1 controller.go:381] "Waiting for informer caches to sync"
	I0907 00:19:48.173144       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0907 00:19:48.173296       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0907 00:19:52.274291       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0907 00:19:52.274398       1 metrics.go:72] Registering metrics
	I0907 00:19:52.274510       1 controller.go:711] "Syncing nftables rules"
	I0907 00:19:58.154266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:19:58.154321       1 main.go:301] handling current node
	I0907 00:20:08.154793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:20:08.154862       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c38cff0c3281b8444ca0b2dfd923df7bb0db6a01d9ae78c16747f8aebcb0febe] <==
	I0907 00:24:26.694327       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:25:25.138802       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:25:55.810148       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:26:32.099439       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:26:58.123137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:28:02.002416       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:28:16.258584       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:29:08.453095       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:29:27.465363       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:30:34.287792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:30:36.823500       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0907 00:30:38.010011       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:31:12.198191       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.25.162"}
	I0907 00:31:32.194867       1 controller.go:667] quota admission added evaluator for: namespaces
	I0907 00:31:32.485026       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.201.10"}
	I0907 00:31:32.505714       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.34.30"}
	I0907 00:31:39.380464       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:31:41.041987       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:32:41.084176       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:32:58.208178       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:33:49.861601       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:34:25.590380       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:34:58.973087       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:35:53.080191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:36:24.030896       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0710bd2c664ccab6bbd3da99d825a4d296c0b65e7e338b1d65d401f19a44afb7] <==
	I0907 00:20:40.312700       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0907 00:20:40.317895       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0907 00:20:40.317914       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0907 00:20:40.317925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0907 00:20:40.317946       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0907 00:20:40.317958       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0907 00:20:40.317969       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0907 00:20:40.318877       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0907 00:20:40.318897       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0907 00:20:40.318990       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0907 00:20:40.330521       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0907 00:20:40.330734       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0907 00:20:40.330882       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-258398"
	I0907 00:20:40.330961       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0907 00:20:40.333090       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0907 00:20:40.333393       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0907 00:20:40.333490       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0907 00:20:40.353701       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E0907 00:31:32.296252       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.314109       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.325536       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.331760       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.335369       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.341820       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.345590       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bfeeacf134e42a28eea9f304b94bb9e9e8dbe00f1e3b5553b3d3370dcbe2853c] <==
	I0907 00:19:55.455541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0907 00:19:55.455727       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0907 00:19:55.457221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0907 00:19:55.458361       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0907 00:19:55.458606       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0907 00:19:55.462131       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0907 00:19:55.465289       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0907 00:19:55.465298       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0907 00:19:55.465364       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0907 00:19:55.465407       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0907 00:19:55.465417       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0907 00:19:55.465424       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0907 00:19:55.467835       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0907 00:19:55.470084       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0907 00:19:55.471268       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0907 00:19:55.488457       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0907 00:19:55.500675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0907 00:19:55.505391       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0907 00:19:55.505460       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0907 00:19:55.505480       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0907 00:19:55.505391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0907 00:19:55.505411       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0907 00:19:55.505937       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0907 00:19:55.508856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0907 00:19:55.517230       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [37a1efe2c040b1d9b0c64308dcaf79efb3d981d5ea69fdc38ef5642da1edd315] <==
	I0907 00:19:51.541116       1 server_linux.go:53] "Using iptables proxy"
	I0907 00:19:51.749438       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0907 00:19:52.350486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0907 00:19:52.350555       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0907 00:19:52.350730       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0907 00:19:52.409464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0907 00:19:52.409523       1 server_linux.go:132] "Using iptables Proxier"
	I0907 00:19:52.423028       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0907 00:19:52.423429       1 server.go:527] "Version info" version="v1.34.0"
	I0907 00:19:52.423482       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:19:52.425248       1 config.go:200] "Starting service config controller"
	I0907 00:19:52.425276       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0907 00:19:52.431928       1 config.go:106] "Starting endpoint slice config controller"
	I0907 00:19:52.432023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0907 00:19:52.432070       1 config.go:403] "Starting serviceCIDR config controller"
	I0907 00:19:52.432116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0907 00:19:52.433960       1 config.go:309] "Starting node config controller"
	I0907 00:19:52.434061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0907 00:19:52.434094       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0907 00:19:52.525418       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0907 00:19:52.532178       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0907 00:19:52.532479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c4bf1f0f0877404b2812bace2ceb454913aa5dad2c799c8a0846b63b2ee25ca1] <==
	I0907 00:20:38.005461       1 server_linux.go:53] "Using iptables proxy"
	I0907 00:20:38.125326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0907 00:20:38.225601       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0907 00:20:38.225640       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0907 00:20:38.225731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0907 00:20:38.258244       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0907 00:20:38.258366       1 server_linux.go:132] "Using iptables Proxier"
	I0907 00:20:38.263385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0907 00:20:38.263817       1 server.go:527] "Version info" version="v1.34.0"
	I0907 00:20:38.264340       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:20:38.266068       1 config.go:200] "Starting service config controller"
	I0907 00:20:38.272675       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0907 00:20:38.266468       1 config.go:106] "Starting endpoint slice config controller"
	I0907 00:20:38.272710       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0907 00:20:38.266492       1 config.go:403] "Starting serviceCIDR config controller"
	I0907 00:20:38.272722       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0907 00:20:38.272728       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0907 00:20:38.267195       1 config.go:309] "Starting node config controller"
	I0907 00:20:38.272736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0907 00:20:38.272741       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0907 00:20:38.373367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0907 00:20:38.373367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [26716712168ad4de72f836c6536cc652a7b48ec035f18810829a45a8309eb0fc] <==
	I0907 00:20:35.314680       1 serving.go:386] Generated self-signed cert in-memory
	I0907 00:20:37.932613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0907 00:20:37.934546       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:20:37.955408       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0907 00:20:37.955612       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0907 00:20:37.955676       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0907 00:20:37.955731       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0907 00:20:37.961296       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0907 00:20:37.961394       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0907 00:20:37.961107       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:37.962089       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:38.056237       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0907 00:20:38.062617       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:38.062681       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [b465a21ac08f68c15cd5c2e9eb3b0dbdb157abbea159a6e40a5457376c004ea4] <==
	I0907 00:19:50.208779       1 serving.go:386] Generated self-signed cert in-memory
	W0907 00:19:52.041250       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:19:52.041372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:19:52.041408       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:19:52.041458       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:19:52.152553       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0907 00:19:52.152611       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:19:52.183767       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0907 00:19:52.183913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:19:52.184418       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:19:52.183934       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0907 00:19:52.193199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0907 00:19:52.257317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0907 00:19:52.257509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0907 00:19:52.257624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0907 00:19:52.257985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0907 00:19:52.258137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0907 00:19:53.785228       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:16.762344       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0907 00:20:16.762514       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0907 00:20:16.762538       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0907 00:20:16.762579       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:16.763202       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0907 00:20:16.763234       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 07 00:36:18 functional-258398 kubelet[4454]: E0907 00:36:18.720685    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="523eee7f-cb23-4b14-80b1-ef07ee3a6991"
	Sep 07 00:36:22 functional-258398 kubelet[4454]: E0907 00:36:22.757189    4454 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757205382756938026 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:174268} inodes_used:{value:88}}"
	Sep 07 00:36:22 functional-258398 kubelet[4454]: E0907 00:36:22.757224    4454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757205382756938026 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:174268} inodes_used:{value:88}}"
	Sep 07 00:36:23 functional-258398 kubelet[4454]: E0907 00:36:23.424546    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-p29gr" podUID="f818d25d-5bb5-4279-ab52-48681998cbe4"
	Sep 07 00:36:30 functional-258398 kubelet[4454]: E0907 00:36:30.425282    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="523eee7f-cb23-4b14-80b1-ef07ee3a6991"
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.572112    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-05c189d746f36b8790f54808d2b0c31437fe8558ecb8da9790c03620c4d5a425: Error finding container 05c189d746f36b8790f54808d2b0c31437fe8558ecb8da9790c03620c4d5a425: Status 404 returned error can't find the container with id 05c189d746f36b8790f54808d2b0c31437fe8558ecb8da9790c03620c4d5a425
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.572600    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-c656aaca1c898b44d5c625fe5a6741cbb563e2bf7b3d40f385ed7aa5eb87b7ba: Error finding container c656aaca1c898b44d5c625fe5a6741cbb563e2bf7b3d40f385ed7aa5eb87b7ba: Status 404 returned error can't find the container with id c656aaca1c898b44d5c625fe5a6741cbb563e2bf7b3d40f385ed7aa5eb87b7ba
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.572802    4454 manager.go:1116] Failed to create existing container: /crio-c82b517fa6bed054801677cd8ece33888d71ad3028888d376af284f8618a7d9a: Error finding container c82b517fa6bed054801677cd8ece33888d71ad3028888d376af284f8618a7d9a: Status 404 returned error can't find the container with id c82b517fa6bed054801677cd8ece33888d71ad3028888d376af284f8618a7d9a
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.573123    4454 manager.go:1116] Failed to create existing container: /crio-c656aaca1c898b44d5c625fe5a6741cbb563e2bf7b3d40f385ed7aa5eb87b7ba: Error finding container c656aaca1c898b44d5c625fe5a6741cbb563e2bf7b3d40f385ed7aa5eb87b7ba: Status 404 returned error can't find the container with id c656aaca1c898b44d5c625fe5a6741cbb563e2bf7b3d40f385ed7aa5eb87b7ba
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.573277    4454 manager.go:1116] Failed to create existing container: /crio-78597cbaec082f78623c5a32c656e5ffc1e0cf054580d37e86414c855ea23161: Error finding container 78597cbaec082f78623c5a32c656e5ffc1e0cf054580d37e86414c855ea23161: Status 404 returned error can't find the container with id 78597cbaec082f78623c5a32c656e5ffc1e0cf054580d37e86414c855ea23161
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.573430    4454 manager.go:1116] Failed to create existing container: /crio-6327ce791bce0a1a46ce7d609c401d5638c7359a43f6fa332435abfc706930f7: Error finding container 6327ce791bce0a1a46ce7d609c401d5638c7359a43f6fa332435abfc706930f7: Status 404 returned error can't find the container with id 6327ce791bce0a1a46ce7d609c401d5638c7359a43f6fa332435abfc706930f7
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.573582    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-86836b9694482f49ae9f6c7170ddb3f4d10f0d2be261fc403b6b97929ab97c97: Error finding container 86836b9694482f49ae9f6c7170ddb3f4d10f0d2be261fc403b6b97929ab97c97: Status 404 returned error can't find the container with id 86836b9694482f49ae9f6c7170ddb3f4d10f0d2be261fc403b6b97929ab97c97
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.573723    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-d2330709f98e35a982097e7e93617e659d2c22fb161438b67e1d574534ed72ec: Error finding container d2330709f98e35a982097e7e93617e659d2c22fb161438b67e1d574534ed72ec: Status 404 returned error can't find the container with id d2330709f98e35a982097e7e93617e659d2c22fb161438b67e1d574534ed72ec
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.573866    4454 manager.go:1116] Failed to create existing container: /crio-05c189d746f36b8790f54808d2b0c31437fe8558ecb8da9790c03620c4d5a425: Error finding container 05c189d746f36b8790f54808d2b0c31437fe8558ecb8da9790c03620c4d5a425: Status 404 returned error can't find the container with id 05c189d746f36b8790f54808d2b0c31437fe8558ecb8da9790c03620c4d5a425
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.574122    4454 manager.go:1116] Failed to create existing container: /crio-86836b9694482f49ae9f6c7170ddb3f4d10f0d2be261fc403b6b97929ab97c97: Error finding container 86836b9694482f49ae9f6c7170ddb3f4d10f0d2be261fc403b6b97929ab97c97: Status 404 returned error can't find the container with id 86836b9694482f49ae9f6c7170ddb3f4d10f0d2be261fc403b6b97929ab97c97
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.574296    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-fc9740565f3c30e8129ba033cd21292f0227adc4f4fc53d557d0fc7cd8dfb1ec: Error finding container fc9740565f3c30e8129ba033cd21292f0227adc4f4fc53d557d0fc7cd8dfb1ec: Status 404 returned error can't find the container with id fc9740565f3c30e8129ba033cd21292f0227adc4f4fc53d557d0fc7cd8dfb1ec
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.574444    4454 manager.go:1116] Failed to create existing container: /crio-fc9740565f3c30e8129ba033cd21292f0227adc4f4fc53d557d0fc7cd8dfb1ec: Error finding container fc9740565f3c30e8129ba033cd21292f0227adc4f4fc53d557d0fc7cd8dfb1ec: Status 404 returned error can't find the container with id fc9740565f3c30e8129ba033cd21292f0227adc4f4fc53d557d0fc7cd8dfb1ec
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.574591    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-6327ce791bce0a1a46ce7d609c401d5638c7359a43f6fa332435abfc706930f7: Error finding container 6327ce791bce0a1a46ce7d609c401d5638c7359a43f6fa332435abfc706930f7: Status 404 returned error can't find the container with id 6327ce791bce0a1a46ce7d609c401d5638c7359a43f6fa332435abfc706930f7
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.574734    4454 manager.go:1116] Failed to create existing container: /crio-d2330709f98e35a982097e7e93617e659d2c22fb161438b67e1d574534ed72ec: Error finding container d2330709f98e35a982097e7e93617e659d2c22fb161438b67e1d574534ed72ec: Status 404 returned error can't find the container with id d2330709f98e35a982097e7e93617e659d2c22fb161438b67e1d574534ed72ec
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.574871    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-78597cbaec082f78623c5a32c656e5ffc1e0cf054580d37e86414c855ea23161: Error finding container 78597cbaec082f78623c5a32c656e5ffc1e0cf054580d37e86414c855ea23161: Status 404 returned error can't find the container with id 78597cbaec082f78623c5a32c656e5ffc1e0cf054580d37e86414c855ea23161
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.575015    4454 manager.go:1116] Failed to create existing container: /docker/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/crio-c82b517fa6bed054801677cd8ece33888d71ad3028888d376af284f8618a7d9a: Error finding container c82b517fa6bed054801677cd8ece33888d71ad3028888d376af284f8618a7d9a: Status 404 returned error can't find the container with id c82b517fa6bed054801677cd8ece33888d71ad3028888d376af284f8618a7d9a
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.576195    4454 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/59d9f63b55543517499156ef3df087716e07e3b45b5648739e9d9f3aef044080/diff" to get inode usage: stat /var/lib/containers/storage/overlay/59d9f63b55543517499156ef3df087716e07e3b45b5648739e9d9f3aef044080/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.592996    4454 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/59d9f63b55543517499156ef3df087716e07e3b45b5648739e9d9f3aef044080/diff" to get inode usage: stat /var/lib/containers/storage/overlay/59d9f63b55543517499156ef3df087716e07e3b45b5648739e9d9f3aef044080/diff: no such file or directory, extraDiskErr: <nil>
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.759182    4454 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757205392758862359 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:174268} inodes_used:{value:88}}"
	Sep 07 00:36:32 functional-258398 kubelet[4454]: E0907 00:36:32.759213    4454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757205392758862359 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:174268} inodes_used:{value:88}}"
	
	
	==> storage-provisioner [111784a401976f2a5a986423114642f4743638d18cd4a583a585524ae113d149] <==
	W0907 00:36:07.621266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:09.624270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:09.628981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:11.632368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:11.637122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:13.640477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:13.645550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:15.648385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:15.654980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:17.657842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:17.664467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:19.667502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:19.671904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:21.676234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:21.681756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:23.684703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:23.691223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:25.694526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:25.699142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:27.702125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:27.706417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:29.709544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:29.713817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:31.717523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:36:31.725009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e583e4d6e6fa321b536b9585b078a2495f4006697027b65b59a97bc290776685] <==
	I0907 00:19:48.412721       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:19:52.282855       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:19:52.282915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0907 00:19:52.336994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:19:55.792072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:00.058055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:03.656923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:06.710136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:09.732740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:09.740805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0907 00:20:09.741329       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:20:09.741766       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc25c257-b99c-4b2e-a079-2ab17a965bfd", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-258398_f27cd163-c8b9-4921-b22b-4f71810f255b became leader
	I0907 00:20:09.742047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-258398_f27cd163-c8b9-4921-b22b-4f71810f255b!
	W0907 00:20:09.757145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:09.759989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0907 00:20:09.843963       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-258398_f27cd163-c8b9-4921-b22b-4f71810f255b!
	W0907 00:20:11.763392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:11.768036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:13.812948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:13.823465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:15.827850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:15.836477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
helpers_test.go:269: (dbg) Run:  kubectl --context functional-258398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-258398 describe pod busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-258398 describe pod busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2: exit status 1 (129.010669ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:31:19 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://6b423e44879712682b01af3a82faf05d30be5d1c7f309a2ef0b8bb741c3bd2c2
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Sep 2025 00:31:23 +0000
	      Finished:     Sun, 07 Sep 2025 00:31:23 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfh4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kfh4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-258398
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.449s (3.449s including waiting). Image size: 3774172 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-p29gr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:21:06 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdvdd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sdvdd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p29gr to functional-258398
	  Normal   Pulling    11m (x5 over 15m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     11m (x5 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     11m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    24s (x50 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     24s (x50 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-rsgjm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:31:12 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-989l6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-989l6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m21s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rsgjm to functional-258398
	  Warning  Failed     119s (x4 over 5m21s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     119s (x4 over 5m21s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x9 over 5m21s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     48s (x9 over 5m21s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    36s (x5 over 5m21s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:21:11 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glkmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-glkmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/nginx-svc to functional-258398
	  Warning  Failed     14m                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x2 over 13m)     kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    9m49s (x5 over 15m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9m19s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     9m19s (x2 over 11m)   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m20s (x29 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x45 over 14m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:27:12 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cvc2w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cvc2w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m21s                  default-scheduler  Successfully assigned default/sp-pod to functional-258398
	  Warning  Failed     7m4s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m4s (x5 over 9m20s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m (x4 over 8m48s)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m (x5 over 8m48s)     kubelet            Error: ErrImagePull
	  Warning  Failed     104s (x16 over 8m48s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    39s (x21 over 8m48s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2gcdg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-chzl2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-258398 describe pod busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.83s)
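
The nginx:alpine and nginx pulls in the pod describes above are all failing on Docker Hub's unauthenticated pull rate limit (toomanyrequests), not on anything crio- or minikube-specific. A possible mitigation for runs like this, assuming the image can be fetched once on the build host (or from an authenticated mirror), is to side-load it into the node so the kubelet never has to pull from docker.io:

    # sketch only: pre-pull on the host, then load it into the profile's node
    docker pull docker.io/library/nginx:alpine
    minikube -p functional-258398 image load docker.io/library/nginx:alpine

Authenticated pulls (e.g. an imagePullSecret tied to a Docker Hub account) would also work; the image-load route just keeps the test independent of registry quotas.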

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-258398 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-258398 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-rsgjm" [694a65d3-ecd4-45a3-8278-a08f57038389] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-07 00:41:12.534565454 +0000 UTC m=+1889.965425960
functional_test.go:1645: (dbg) Run:  kubectl --context functional-258398 describe po hello-node-connect-7d85dfc575-rsgjm -n default
functional_test.go:1645: (dbg) kubectl --context functional-258398 describe po hello-node-connect-7d85dfc575-rsgjm -n default:
Name:             hello-node-connect-7d85dfc575-rsgjm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-258398/192.168.49.2
Start Time:       Sun, 07 Sep 2025 00:31:12 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-989l6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-989l6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rsgjm to functional-258398
  Normal   Pulling    5m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     4m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     4m9s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     2m38s (x16 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    90s (x21 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-258398 logs hello-node-connect-7d85dfc575-rsgjm -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-258398 logs hello-node-connect-7d85dfc575-rsgjm -n default: exit status 1 (100.992011ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rsgjm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-258398 logs hello-node-connect-7d85dfc575-rsgjm -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-258398 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-rsgjm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-258398/192.168.49.2
Start Time:       Sun, 07 Sep 2025 00:31:12 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-989l6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-989l6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rsgjm to functional-258398
  Normal   Pulling    5m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     4m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     4m9s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     2m38s (x16 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    90s (x21 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-258398 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-258398 logs -l app=hello-node-connect: exit status 1 (81.866882ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rsgjm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-258398 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-258398 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.25.162
IPs:                      10.100.25.162
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31625/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
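
The empty Endpoints field is consistent with the pod events above: under crio, the unqualified image name kicbase/echo-server is rejected because no short-name alias or unqualified-search registry is configured on the node, so the container never starts and the service has nothing to route to. Two possible fixes (either alone should suffice), assuming the image is published as docker.io/kicbase/echo-server:

    # (a) reference the image by a fully qualified name when creating the deployment
    kubectl --context functional-258398 create deployment hello-node-connect --image docker.io/kicbase/echo-server

    # (b) or allow short names to resolve on the node, e.g. in /etc/containers/registries.conf
    unqualified-search-registries = ["docker.io"]
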
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-258398
helpers_test.go:243: (dbg) docker inspect functional-258398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515",
	        "Created": "2025-09-07T00:18:23.454660871Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-07T00:18:23.512135033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/hosts",
	        "LogPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515-json.log",
	        "Name": "/functional-258398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-258398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-258398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515",
	                "LowerDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8-init/diff:/var/lib/docker/overlay2/5a4b8b8cbe09f4c7d8197d949f1b03b5a8d427ad9c5a27d0359fd04ab981afab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-258398",
	                "Source": "/var/lib/docker/volumes/functional-258398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-258398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-258398",
	                "name.minikube.sigs.k8s.io": "functional-258398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70d1c9b2d70f562f4d6c7d385513bf5d0eaaacc02e705345847c2caf08d1de2c",
	            "SandboxKey": "/var/run/docker/netns/70d1c9b2d70f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-258398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:fb:08:fa:de:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec956df0ad56bd722eaaaf7f53b6bca29823820d69caf85d31f449a0641cba5a",
	                    "EndpointID": "6323a4eda35746f28e310761f04758eede91222887d1875dbe5ab9375cae0ecb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-258398",
	                        "5b933cc290a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-258398 -n functional-258398
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 logs -n 25: (1.725203804s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-258398 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh            │ functional-258398 ssh -- ls -la /mount-9p                                                                          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh            │ functional-258398 ssh sudo umount -f /mount-9p                                                                     │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount          │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount1 --alsologtostderr -v=1 │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount          │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount2 --alsologtostderr -v=1 │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ mount          │ -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount3 --alsologtostderr -v=1 │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh            │ functional-258398 ssh findmnt -T /mount1                                                                           │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ ssh            │ functional-258398 ssh findmnt -T /mount1                                                                           │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh            │ functional-258398 ssh findmnt -T /mount2                                                                           │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ ssh            │ functional-258398 ssh findmnt -T /mount3                                                                           │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ mount          │ -p functional-258398 --kill=true                                                                                   │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ start          │ -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ start          │ -p functional-258398 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ start          │ -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-258398 --alsologtostderr -v=1                                                     │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ update-context │ functional-258398 update-context --alsologtostderr -v=2                                                            │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ update-context │ functional-258398 update-context --alsologtostderr -v=2                                                            │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ update-context │ functional-258398 update-context --alsologtostderr -v=2                                                            │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ image          │ functional-258398 image ls --format short --alsologtostderr                                                        │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ image          │ functional-258398 image ls --format yaml --alsologtostderr                                                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ ssh            │ functional-258398 ssh pgrep buildkitd                                                                              │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │                     │
	│ image          │ functional-258398 image build -t localhost/my-image:functional-258398 testdata/build --alsologtostderr             │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ image          │ functional-258398 image ls                                                                                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ image          │ functional-258398 image ls --format json --alsologtostderr                                                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ image          │ functional-258398 image ls --format table --alsologtostderr                                                        │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:31:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:31:30.995730  328161 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:31:30.995898  328161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.995911  328161 out.go:374] Setting ErrFile to fd 2...
	I0907 00:31:30.995918  328161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.996294  328161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:31:30.996655  328161 out.go:368] Setting JSON to false
	I0907 00:31:30.997600  328161 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8040,"bootTime":1757197051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:31:30.997669  328161 start.go:140] virtualization:  
	I0907 00:31:31.000901  328161 out.go:179] * [functional-258398] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 00:31:31.017952  328161 notify.go:220] Checking for updates...
	I0907 00:31:31.018065  328161 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:31:31.021171  328161 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:31:31.024111  328161 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:31:31.027021  328161 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:31:31.029893  328161 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 00:31:31.032938  328161 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:31:31.036462  328161 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:31:31.037142  328161 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:31:31.071178  328161 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:31:31.071301  328161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:31:31.138155  328161 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 00:31:31.128009597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:31:31.138267  328161 docker.go:318] overlay module found
	I0907 00:31:31.141470  328161 out.go:179] * Using the docker driver based on the existing profile
	I0907 00:31:31.144302  328161 start.go:304] selected driver: docker
	I0907 00:31:31.144322  328161 start.go:918] validating driver "docker" against &{Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:31:31.144432  328161 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:31:31.147897  328161 out.go:203] 
	W0907 00:31:31.150946  328161 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0907 00:31:31.153781  328161 out.go:203] 
	
	
	==> CRI-O <==
	Sep 07 00:40:23 functional-258398 crio[4167]: time="2025-09-07 00:40:23.424455143Z" level=info msg="Image docker.io/nginx:alpine not found" id=d347b9ac-fc85-46be-8735-8013e4dc3769 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:26 functional-258398 crio[4167]: time="2025-09-07 00:40:26.425338254Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=9c98c73a-e6e1-49f0-8ffb-ace64e322abb name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:26 functional-258398 crio[4167]: time="2025-09-07 00:40:26.425618684Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=9c98c73a-e6e1-49f0-8ffb-ace64e322abb name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:34 functional-258398 crio[4167]: time="2025-09-07 00:40:34.424464096Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9320f7c0-bd8d-4e12-8522-14936a0fdde4 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:34 functional-258398 crio[4167]: time="2025-09-07 00:40:34.424799451Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9320f7c0-bd8d-4e12-8522-14936a0fdde4 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:38 functional-258398 crio[4167]: time="2025-09-07 00:40:38.424235249Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=25b7a764-bb75-4f7d-a19f-558a419cf921 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:38 functional-258398 crio[4167]: time="2025-09-07 00:40:38.424463222Z" level=info msg="Image docker.io/nginx:alpine not found" id=25b7a764-bb75-4f7d-a19f-558a419cf921 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:40 functional-258398 crio[4167]: time="2025-09-07 00:40:40.424779423Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=8ce56ccf-c1fa-4584-972f-f795da4a5eef name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:40 functional-258398 crio[4167]: time="2025-09-07 00:40:40.425533900Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=8ce56ccf-c1fa-4584-972f-f795da4a5eef name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:45 functional-258398 crio[4167]: time="2025-09-07 00:40:45.423958643Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c08b2841-950a-449e-ae0e-73ca6bbbdae2 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:45 functional-258398 crio[4167]: time="2025-09-07 00:40:45.424243011Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c08b2841-950a-449e-ae0e-73ca6bbbdae2 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:49 functional-258398 crio[4167]: time="2025-09-07 00:40:49.424870602Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b92dc323-d151-4a4a-a6d6-0b0d38268492 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:49 functional-258398 crio[4167]: time="2025-09-07 00:40:49.425090395Z" level=info msg="Image docker.io/nginx:alpine not found" id=b92dc323-d151-4a4a-a6d6-0b0d38268492 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:52 functional-258398 crio[4167]: time="2025-09-07 00:40:52.424951587Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=6a194461-f54f-44bc-8061-b721f6bc0212 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:52 functional-258398 crio[4167]: time="2025-09-07 00:40:52.425213669Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=6a194461-f54f-44bc-8061-b721f6bc0212 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:57 functional-258398 crio[4167]: time="2025-09-07 00:40:57.424623415Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0f82d6b0-d7fa-45d0-ad41-c47379fcc5d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:40:57 functional-258398 crio[4167]: time="2025-09-07 00:40:57.424915348Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0f82d6b0-d7fa-45d0-ad41-c47379fcc5d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:00 functional-258398 crio[4167]: time="2025-09-07 00:41:00.425482988Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cd6e4e56-1559-447b-8b09-c919b427e9b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:00 functional-258398 crio[4167]: time="2025-09-07 00:41:00.425797593Z" level=info msg="Image docker.io/nginx:alpine not found" id=cd6e4e56-1559-447b-8b09-c919b427e9b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:05 functional-258398 crio[4167]: time="2025-09-07 00:41:05.424615596Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=36b112e1-3ffa-40ed-9aef-533df6dd850e name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:05 functional-258398 crio[4167]: time="2025-09-07 00:41:05.424912592Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=36b112e1-3ffa-40ed-9aef-533df6dd850e name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:12 functional-258398 crio[4167]: time="2025-09-07 00:41:12.425206366Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8a3dec14-f440-4c31-b514-fd2a2aa7e0bf name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:12 functional-258398 crio[4167]: time="2025-09-07 00:41:12.425475333Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8a3dec14-f440-4c31-b514-fd2a2aa7e0bf name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:13 functional-258398 crio[4167]: time="2025-09-07 00:41:13.425161153Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=768fb22d-19de-4376-93b8-ddf77f5ed564 name=/runtime.v1.ImageService/ImageStatus
	Sep 07 00:41:13 functional-258398 crio[4167]: time="2025-09-07 00:41:13.425386632Z" level=info msg="Image docker.io/nginx:alpine not found" id=768fb22d-19de-4376-93b8-ddf77f5ed564 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b423e4487971       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   579c1baf79929       busybox-mount
	5a0731b973a06       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      20 minutes ago      Running             kindnet-cni               2                   6327ce791bce0       kindnet-rdhlv
	c4bf1f0f08774       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      20 minutes ago      Running             kube-proxy                2                   c656aaca1c898       kube-proxy-7m6lc
	4886b47b59b8e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      20 minutes ago      Running             coredns                   2                   d2330709f98e3       coredns-66bc5c9577-zq2c7
	111784a401976       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      20 minutes ago      Running             storage-provisioner       2                   fc9740565f3c3       storage-provisioner
	c38cff0c3281b       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                      20 minutes ago      Running             kube-apiserver            0                   03737861d8eb0       kube-apiserver-functional-258398
	26716712168ad       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      20 minutes ago      Running             kube-scheduler            2                   c82b517fa6bed       kube-scheduler-functional-258398
	0710bd2c664cc       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      20 minutes ago      Running             kube-controller-manager   2                   78597cbaec082       kube-controller-manager-functional-258398
	3812c18eb6f1e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      20 minutes ago      Running             etcd                      2                   86836b9694482       etcd-functional-258398
	997b2d26cc857       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      21 minutes ago      Exited              etcd                      1                   86836b9694482       etcd-functional-258398
	37a1efe2c040b       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      21 minutes ago      Exited              kube-proxy                1                   c656aaca1c898       kube-proxy-7m6lc
	bfeeacf134e42       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      21 minutes ago      Exited              kube-controller-manager   1                   78597cbaec082       kube-controller-manager-functional-258398
	b443422dc44b2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      21 minutes ago      Exited              kindnet-cni               1                   6327ce791bce0       kindnet-rdhlv
	b465a21ac08f6       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      21 minutes ago      Exited              kube-scheduler            1                   c82b517fa6bed       kube-scheduler-functional-258398
	e583e4d6e6fa3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      21 minutes ago      Exited              storage-provisioner       1                   fc9740565f3c3       storage-provisioner
	d6af83dec4758       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      21 minutes ago      Exited              coredns                   1                   d2330709f98e3       coredns-66bc5c9577-zq2c7
	
	
	==> coredns [4886b47b59b8e7541c3771076dbc12aea68d014d9055c0dee94fa923fda17af6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59688 - 41583 "HINFO IN 3897342971655153967.1326956596793676297. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.070203789s
	
	
	==> coredns [d6af83dec4758df3d40a670815100c1ff162fda2953d0e2c204b8645a7a471b3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51420 - 34339 "HINFO IN 6664165119936110056.6222209699285952715. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016530267s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-258398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-258398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d
	                    minikube.k8s.io/name=functional-258398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_07T00_18_49_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Sep 2025 00:18:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-258398
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Sep 2025 00:41:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Sep 2025 00:39:56 +0000   Sun, 07 Sep 2025 00:18:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Sep 2025 00:39:56 +0000   Sun, 07 Sep 2025 00:18:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Sep 2025 00:39:56 +0000   Sun, 07 Sep 2025 00:18:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Sep 2025 00:39:56 +0000   Sun, 07 Sep 2025 00:19:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-258398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e30938cabb64a8891f32345a05d15d3
	  System UUID:                a26dd3f2-2063-4dec-b1a6-8a59580be599
	  Boot ID:                    beae285a-afb1-41fb-a1c4-2915721f6659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-p29gr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  default                     hello-node-connect-7d85dfc575-rsgjm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-zq2c7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 etcd-functional-258398                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-rdhlv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-functional-258398              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-functional-258398     200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-7m6lc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-functional-258398              100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2gcdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-chzl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 22m                kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 22m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  22m (x9 over 22m)  kubelet          Node functional-258398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node functional-258398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node functional-258398 status is now: NodeHasSufficientPID
	  Normal   Starting                 22m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  22m                kubelet          Node functional-258398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                kubelet          Node functional-258398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m                kubelet          Node functional-258398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22m                node-controller  Node functional-258398 event: Registered Node functional-258398 in Controller
	  Normal   NodeReady                21m                kubelet          Node functional-258398 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node functional-258398 event: Registered Node functional-258398 in Controller
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node functional-258398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node functional-258398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node functional-258398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node functional-258398 event: Registered Node functional-258398 in Controller
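The node conditions and the allocated-resources summary above come from kubectl describe node. The Ready condition can also be read directly with a jsonpath query; the kubectl context name is assumed here to match the minikube profile name:

	kubectl --context functional-258398 get node functional-258398 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'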
	
	
	==> dmesg <==
	[Sep 6 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013704] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510843] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033312] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.768135] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.749154] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 6 23:22] hrtimer: interrupt took 27192686 ns
	[Sep 7 00:09] kauditd_printk_skb: 8 callbacks suppressed
	[Sep 7 00:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [3812c18eb6f1e4dbbd6fc70678a89a0fe64fecd4886f68a9e65ab3f7f1b1e4b1] <==
	{"level":"warn","ts":"2025-09-07T00:20:35.414387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.452264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.470221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.492390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.508417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.529641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.545299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.568147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.582005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.600775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.610218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.634345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.685361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.697569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.744857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:20:35.848907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40804","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-07T00:30:34.290274Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1033}
	{"level":"info","ts":"2025-09-07T00:30:34.313825Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1033,"took":"23.260813ms","hash":274688013,"current-db-size-bytes":3088384,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1302528,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-07T00:30:34.313877Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":274688013,"revision":1033,"compact-revision":-1}
	{"level":"info","ts":"2025-09-07T00:35:34.301218Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1361}
	{"level":"info","ts":"2025-09-07T00:35:34.305612Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1361,"took":"3.687902ms","hash":3927811924,"current-db-size-bytes":3088384,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":2260992,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-07T00:35:34.305666Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3927811924,"revision":1361,"compact-revision":1033}
	{"level":"info","ts":"2025-09-07T00:40:34.309641Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1810}
	{"level":"info","ts":"2025-09-07T00:40:34.315188Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1810,"took":"4.868348ms","hash":1910406592,"current-db-size-bytes":3088384,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":2473984,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2025-09-07T00:40:34.315256Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1910406592,"revision":1810,"compact-revision":1361}
	
	
	==> etcd [997b2d26cc8578cb93e90c9a02727c48df63d896c893ff7a9cf8f5328848caaa] <==
	{"level":"warn","ts":"2025-09-07T00:19:50.878601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:50.904040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:50.935548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:50.981661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:51.016879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:51.061299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-07T00:19:51.134333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-07T00:20:16.768639Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-07T00:20:16.768759Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-258398","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-07T00:20:16.769628Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-07T00:20:16.769785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-07T00:20:16.906230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906302Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906396Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906424Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-07T00:20:16.906433Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-07T00:20:16.906397Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-07T00:20:16.906456Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-07T00:20:16.906415Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-07T00:20:16.906559Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-07T00:20:16.906571Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-07T00:20:16.910796Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-07T00:20:16.910886Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-07T00:20:16.910918Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-07T00:20:16.910925Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-258398","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 00:41:14 up  2:23,  0 users,  load average: 0.30, 0.31, 0.85
	Linux functional-258398 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5a0731b973a0698a6905641675dbe03ce2d15ce6065f8ef93792af6101604349] <==
	I0907 00:39:08.142571       1 main.go:301] handling current node
	I0907 00:39:18.143653       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:39:18.143688       1 main.go:301] handling current node
	I0907 00:39:28.144526       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:39:28.144661       1 main.go:301] handling current node
	I0907 00:39:38.141980       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:39:38.142097       1 main.go:301] handling current node
	I0907 00:39:48.144318       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:39:48.144437       1 main.go:301] handling current node
	I0907 00:39:58.142461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:39:58.142496       1 main.go:301] handling current node
	I0907 00:40:08.142511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:40:08.142543       1 main.go:301] handling current node
	I0907 00:40:18.145104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:40:18.145304       1 main.go:301] handling current node
	I0907 00:40:28.141850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:40:28.141965       1 main.go:301] handling current node
	I0907 00:40:38.141437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:40:38.141551       1 main.go:301] handling current node
	I0907 00:40:48.144973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:40:48.145008       1 main.go:301] handling current node
	I0907 00:40:58.141609       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:40:58.141739       1 main.go:301] handling current node
	I0907 00:41:08.141456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:41:08.141489       1 main.go:301] handling current node
	
	
	==> kindnet [b443422dc44b238ac8bf8aebfd292ce8217f9df9c0a84c02e9aa8b2c94165428] <==
	I0907 00:19:47.940846       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0907 00:19:47.949081       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0907 00:19:47.949235       1 main.go:148] setting mtu 1500 for CNI 
	I0907 00:19:47.949255       1 main.go:178] kindnetd IP family: "ipv4"
	I0907 00:19:47.949269       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-07T00:19:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0907 00:19:48.173097       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0907 00:19:48.173134       1 controller.go:381] "Waiting for informer caches to sync"
	I0907 00:19:48.173144       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0907 00:19:48.173296       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0907 00:19:52.274291       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0907 00:19:52.274398       1 metrics.go:72] Registering metrics
	I0907 00:19:52.274510       1 controller.go:711] "Syncing nftables rules"
	I0907 00:19:58.154266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:19:58.154321       1 main.go:301] handling current node
	I0907 00:20:08.154793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0907 00:20:08.154862       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c38cff0c3281b8444ca0b2dfd923df7bb0db6a01d9ae78c16747f8aebcb0febe] <==
	I0907 00:29:27.465363       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:30:34.287792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:30:36.823500       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0907 00:30:38.010011       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:31:12.198191       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.25.162"}
	I0907 00:31:32.194867       1 controller.go:667] quota admission added evaluator for: namespaces
	I0907 00:31:32.485026       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.201.10"}
	I0907 00:31:32.505714       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.34.30"}
	I0907 00:31:39.380464       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:31:41.041987       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:32:41.084176       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:32:58.208178       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:33:49.861601       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:34:25.590380       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:34:58.973087       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:35:53.080191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:36:24.030896       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:36:58.127479       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:37:29.601502       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:38:19.523337       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:38:46.717742       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:39:40.749895       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:40:16.189973       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0907 00:40:36.824048       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0907 00:41:08.613791       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0710bd2c664ccab6bbd3da99d825a4d296c0b65e7e338b1d65d401f19a44afb7] <==
	I0907 00:20:40.312700       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0907 00:20:40.317895       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0907 00:20:40.317914       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0907 00:20:40.317925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0907 00:20:40.317946       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0907 00:20:40.317958       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0907 00:20:40.317969       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0907 00:20:40.318877       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0907 00:20:40.318897       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0907 00:20:40.318990       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0907 00:20:40.330521       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0907 00:20:40.330734       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0907 00:20:40.330882       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-258398"
	I0907 00:20:40.330961       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0907 00:20:40.333090       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0907 00:20:40.333393       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0907 00:20:40.333490       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0907 00:20:40.353701       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E0907 00:31:32.296252       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.314109       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.325536       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.331760       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.335369       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.341820       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0907 00:31:32.345590       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
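The "serviceaccount kubernetes-dashboard not found" errors above are ordering noise: the ReplicaSets were synced before the dashboard ServiceAccount had been applied at 00:31:32, the same moment the apiserver log shows the namespace and services being created, and both dashboard pods do exist later in the node's pod list. That the account eventually arrived could be confirmed with, for example:

	kubectl --context functional-258398 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard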
	
	
	==> kube-controller-manager [bfeeacf134e42a28eea9f304b94bb9e9e8dbe00f1e3b5553b3d3370dcbe2853c] <==
	I0907 00:19:55.455541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0907 00:19:55.455727       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0907 00:19:55.457221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0907 00:19:55.458361       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0907 00:19:55.458606       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0907 00:19:55.462131       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0907 00:19:55.465289       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0907 00:19:55.465298       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0907 00:19:55.465364       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0907 00:19:55.465407       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0907 00:19:55.465417       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0907 00:19:55.465424       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0907 00:19:55.467835       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0907 00:19:55.470084       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0907 00:19:55.471268       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0907 00:19:55.488457       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0907 00:19:55.500675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0907 00:19:55.505391       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0907 00:19:55.505460       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0907 00:19:55.505480       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0907 00:19:55.505391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0907 00:19:55.505411       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0907 00:19:55.505937       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0907 00:19:55.508856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0907 00:19:55.517230       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [37a1efe2c040b1d9b0c64308dcaf79efb3d981d5ea69fdc38ef5642da1edd315] <==
	I0907 00:19:51.541116       1 server_linux.go:53] "Using iptables proxy"
	I0907 00:19:51.749438       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0907 00:19:52.350486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0907 00:19:52.350555       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0907 00:19:52.350730       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0907 00:19:52.409464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0907 00:19:52.409523       1 server_linux.go:132] "Using iptables Proxier"
	I0907 00:19:52.423028       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0907 00:19:52.423429       1 server.go:527] "Version info" version="v1.34.0"
	I0907 00:19:52.423482       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:19:52.425248       1 config.go:200] "Starting service config controller"
	I0907 00:19:52.425276       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0907 00:19:52.431928       1 config.go:106] "Starting endpoint slice config controller"
	I0907 00:19:52.432023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0907 00:19:52.432070       1 config.go:403] "Starting serviceCIDR config controller"
	I0907 00:19:52.432116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0907 00:19:52.433960       1 config.go:309] "Starting node config controller"
	I0907 00:19:52.434061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0907 00:19:52.434094       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0907 00:19:52.525418       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0907 00:19:52.532178       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0907 00:19:52.532479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
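The nodePortAddresses warning is advisory: with the field unset, NodePort traffic is accepted on every local IP. A minimal KubeProxyConfiguration fragment following the hint in the message itself would look like the sketch below (delivered through the kube-proxy ConfigMap in a kubeadm-style setup, which is an assumption about this cluster):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses: ["primary"]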
	
	
	==> kube-proxy [c4bf1f0f0877404b2812bace2ceb454913aa5dad2c799c8a0846b63b2ee25ca1] <==
	I0907 00:20:38.005461       1 server_linux.go:53] "Using iptables proxy"
	I0907 00:20:38.125326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0907 00:20:38.225601       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0907 00:20:38.225640       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0907 00:20:38.225731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0907 00:20:38.258244       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0907 00:20:38.258366       1 server_linux.go:132] "Using iptables Proxier"
	I0907 00:20:38.263385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0907 00:20:38.263817       1 server.go:527] "Version info" version="v1.34.0"
	I0907 00:20:38.264340       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:20:38.266068       1 config.go:200] "Starting service config controller"
	I0907 00:20:38.272675       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0907 00:20:38.266468       1 config.go:106] "Starting endpoint slice config controller"
	I0907 00:20:38.272710       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0907 00:20:38.266492       1 config.go:403] "Starting serviceCIDR config controller"
	I0907 00:20:38.272722       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0907 00:20:38.272728       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0907 00:20:38.267195       1 config.go:309] "Starting node config controller"
	I0907 00:20:38.272736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0907 00:20:38.272741       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0907 00:20:38.373367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0907 00:20:38.373367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [26716712168ad4de72f836c6536cc652a7b48ec035f18810829a45a8309eb0fc] <==
	I0907 00:20:35.314680       1 serving.go:386] Generated self-signed cert in-memory
	I0907 00:20:37.932613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0907 00:20:37.934546       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:20:37.955408       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0907 00:20:37.955612       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0907 00:20:37.955676       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0907 00:20:37.955731       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0907 00:20:37.961296       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0907 00:20:37.961394       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0907 00:20:37.961107       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:37.962089       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:38.056237       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0907 00:20:38.062617       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:38.062681       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [b465a21ac08f68c15cd5c2e9eb3b0dbdb157abbea159a6e40a5457376c004ea4] <==
	I0907 00:19:50.208779       1 serving.go:386] Generated self-signed cert in-memory
	W0907 00:19:52.041250       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:19:52.041372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:19:52.041408       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:19:52.041458       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:19:52.152553       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0907 00:19:52.152611       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:19:52.183767       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0907 00:19:52.183913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:19:52.184418       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:19:52.183934       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0907 00:19:52.193199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0907 00:19:52.257317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0907 00:19:52.257509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0907 00:19:52.257624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0907 00:19:52.257985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0907 00:19:52.258137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0907 00:19:53.785228       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:16.762344       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0907 00:20:16.762514       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0907 00:20:16.762538       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0907 00:20:16.762579       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:20:16.763202       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0907 00:20:16.763234       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 07 00:40:41 functional-258398 kubelet[4454]: E0907 00:40:41.424323    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-rsgjm" podUID="694a65d3-ecd4-45a3-8278-a08f57038389"
	Sep 07 00:40:42 functional-258398 kubelet[4454]: E0907 00:40:42.810131    4454 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757205642809890228 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:40:42 functional-258398 kubelet[4454]: E0907 00:40:42.810164    4454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757205642809890228 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:40:45 functional-258398 kubelet[4454]: E0907 00:40:45.423749    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="576dde8e-bfd0-4d9b-b3ab-d39c37eba2bc"
	Sep 07 00:40:45 functional-258398 kubelet[4454]: E0907 00:40:45.424607    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chzl2" podUID="28192880-98cc-48b8-b20e-a7f5e9735e74"
	Sep 07 00:40:47 functional-258398 kubelet[4454]: E0907 00:40:47.424534    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-p29gr" podUID="f818d25d-5bb5-4279-ab52-48681998cbe4"
	Sep 07 00:40:49 functional-258398 kubelet[4454]: E0907 00:40:49.425346    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="523eee7f-cb23-4b14-80b1-ef07ee3a6991"
	Sep 07 00:40:52 functional-258398 kubelet[4454]: E0907 00:40:52.425742    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2gcdg" podUID="a69c65ea-3a09-4546-b6cc-212d2e25a6df"
	Sep 07 00:40:52 functional-258398 kubelet[4454]: E0907 00:40:52.811918    4454 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757205652811678796 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:40:52 functional-258398 kubelet[4454]: E0907 00:40:52.811956    4454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757205652811678796 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:40:53 functional-258398 kubelet[4454]: E0907 00:40:53.424407    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-rsgjm" podUID="694a65d3-ecd4-45a3-8278-a08f57038389"
	Sep 07 00:40:57 functional-258398 kubelet[4454]: E0907 00:40:57.425441    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chzl2" podUID="28192880-98cc-48b8-b20e-a7f5e9735e74"
	Sep 07 00:40:59 functional-258398 kubelet[4454]: E0907 00:40:59.424172    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="576dde8e-bfd0-4d9b-b3ab-d39c37eba2bc"
	Sep 07 00:40:59 functional-258398 kubelet[4454]: E0907 00:40:59.424213    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-p29gr" podUID="f818d25d-5bb5-4279-ab52-48681998cbe4"
	Sep 07 00:41:00 functional-258398 kubelet[4454]: E0907 00:41:00.426570    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="523eee7f-cb23-4b14-80b1-ef07ee3a6991"
	Sep 07 00:41:02 functional-258398 kubelet[4454]: E0907 00:41:02.814215    4454 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757205662813963757 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:41:02 functional-258398 kubelet[4454]: E0907 00:41:02.814248    4454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757205662813963757 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:41:05 functional-258398 kubelet[4454]: E0907 00:41:05.425213    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2gcdg" podUID="a69c65ea-3a09-4546-b6cc-212d2e25a6df"
	Sep 07 00:41:07 functional-258398 kubelet[4454]: E0907 00:41:07.424264    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-rsgjm" podUID="694a65d3-ecd4-45a3-8278-a08f57038389"
	Sep 07 00:41:11 functional-258398 kubelet[4454]: E0907 00:41:11.424132    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-p29gr" podUID="f818d25d-5bb5-4279-ab52-48681998cbe4"
	Sep 07 00:41:12 functional-258398 kubelet[4454]: E0907 00:41:12.426439    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-chzl2" podUID="28192880-98cc-48b8-b20e-a7f5e9735e74"
	Sep 07 00:41:12 functional-258398 kubelet[4454]: E0907 00:41:12.816154    4454 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757205672815889104 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:41:12 functional-258398 kubelet[4454]: E0907 00:41:12.816193    4454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757205672815889104 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 07 00:41:13 functional-258398 kubelet[4454]: E0907 00:41:13.424360    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="576dde8e-bfd0-4d9b-b3ab-d39c37eba2bc"
	Sep 07 00:41:13 functional-258398 kubelet[4454]: E0907 00:41:13.425648    4454 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="523eee7f-cb23-4b14-80b1-ef07ee3a6991"
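Two distinct pull failures recur above: docker.io rate limiting for the nginx and dashboard images, and CRI-O rejecting the short name kicbase/echo-server because no unqualified-search registries are configured in /etc/containers/registries.conf. A minimal sketch of the registries.conf line that would let short names resolve follows; equivalently, a fully qualified image reference (registry host included) avoids short-name resolution entirely:

	# /etc/containers/registries.conf on the node
	unqualified-search-registries = ["docker.io"]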
	
	
	==> storage-provisioner [111784a401976f2a5a986423114642f4743638d18cd4a583a585524ae113d149] <==
	W0907 00:40:50.853121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:52.856094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:52.860578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:54.863754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:54.868214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:56.871265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:56.878256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:58.881386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:40:58.888154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:00.891843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:00.896686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:02.900301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:02.907477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:04.910667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:04.915144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:06.918589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:06.923259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:08.926414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:08.931039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:10.933772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:10.938431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:12.942012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:12.947043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:14.950917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:41:14.956416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e583e4d6e6fa321b536b9585b078a2495f4006697027b65b59a97bc290776685] <==
	I0907 00:19:48.412721       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0907 00:19:52.282855       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0907 00:19:52.282915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0907 00:19:52.336994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:19:55.792072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:00.058055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:03.656923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:06.710136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:09.732740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:09.740805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0907 00:20:09.741329       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0907 00:20:09.741766       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc25c257-b99c-4b2e-a079-2ab17a965bfd", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-258398_f27cd163-c8b9-4921-b22b-4f71810f255b became leader
	I0907 00:20:09.742047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-258398_f27cd163-c8b9-4921-b22b-4f71810f255b!
	W0907 00:20:09.757145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:09.759989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0907 00:20:09.843963       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-258398_f27cd163-c8b9-4921-b22b-4f71810f255b!
	W0907 00:20:11.763392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:11.768036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:13.812948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:13.823465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:15.827850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0907 00:20:15.836477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
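The ImagePullBackOff and ErrImagePull entries in the kubelet log above all trace back to Docker Hub's unauthenticated pull rate limit (toomanyrequests). One common mitigation, not part of this test run, is to pull with Docker Hub credentials through an imagePullSecret; a minimal sketch, where the secret name regcred and the username/token values are placeholders:

    kubectl create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<dockerhub-user> \
      --docker-password=<dockerhub-access-token>
    kubectl patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

With the default service account patched this way, new pods in that namespace pull through the authenticated account and get the higher per-account limit.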
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
helpers_test.go:269: (dbg) Run:  kubectl --context functional-258398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-258398 describe pod busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-258398 describe pod busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2: exit status 1 (132.664539ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:31:19 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://6b423e44879712682b01af3a82faf05d30be5d1c7f309a2ef0b8bb741c3bd2c2
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Sep 2025 00:31:23 +0000
	      Finished:     Sun, 07 Sep 2025 00:31:23 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfh4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kfh4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m55s  default-scheduler  Successfully assigned default/busybox-mount to functional-258398
	  Normal  Pulling    9m55s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m52s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.449s (3.449s including waiting). Image size: 3774172 bytes.
	  Normal  Created    9m52s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-p29gr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:21:06 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdvdd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sdvdd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  20m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p29gr to functional-258398
	  Normal   Pulling    15m (x5 over 20m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     15m (x5 over 20m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     15m (x5 over 20m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x72 over 20m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x72 over 20m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-rsgjm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:31:12 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-989l6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-989l6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rsgjm to functional-258398
	  Normal   Pulling    5m18s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     4m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m41s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    93s (x21 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:21:11 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glkmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-glkmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  20m                   default-scheduler  Successfully assigned default/nginx-svc to functional-258398
	  Warning  Failed     19m                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     17m (x2 over 18m)     kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    14m (x5 over 20m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     14m (x5 over 19m)     kubelet            Error: ErrImagePull
	  Warning  Failed     14m (x2 over 16m)     kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m52s (x49 over 19m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x67 over 19m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:27:12 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cvc2w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cvc2w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/sp-pod to functional-258398
	  Warning  Failed     11m                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m46s (x5 over 14m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m42s (x5 over 13m)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m26s (x16 over 13m)  kubelet            Error: ImagePullBackOff
	  Warning  Failed     3m12s (x5 over 13m)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m57s (x23 over 13m)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2gcdg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-chzl2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-258398 describe pod busybox-mount hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2gcdg kubernetes-dashboard-855c9754f9-chzl2: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.68s)
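Separate from the rate-limit failures, the hello-node and hello-node-connect pods above fail for a different reason: the image is referenced by the short name kicbase/echo-server, and the node's /etc/containers/registries.conf defines no unqualified-search registries, so CRI-O cannot resolve the name at all. A minimal sketch of the relevant setting (illustrative, not the file actually present on this node):

    # /etc/containers/registries.conf
    unqualified-search-registries = ["docker.io"]

Alternatively, referencing the image fully qualified (for example docker.io/kicbase/echo-server, with an appropriate tag) sidesteps short-name resolution entirely.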

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (248.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [95c02c7d-8d4d-4843-8987-b563f651fa67] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003438657s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-258398 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-258398 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-258398 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-258398 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [576dde8e-bfd0-4d9b-b3ab-d39c37eba2bc] Pending
helpers_test.go:352: "sp-pod" [576dde8e-bfd0-4d9b-b3ab-d39c37eba2bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0907 00:27:59.742740  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-07 00:31:13.215534556 +0000 UTC m=+1290.646395070
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-258398 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-258398 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-258398/192.168.49.2
Start Time:       Sun, 07 Sep 2025 00:27:12 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cvc2w (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-cvc2w:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/sp-pod to functional-258398
Warning  Failed     104s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    65s (x5 over 3m28s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     65s (x5 over 3m28s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    50s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     9s (x3 over 3m28s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     9s (x4 over 3m28s)   kubelet            Error: ErrImagePull
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-258398 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-258398 logs sp-pod -n default: exit status 1 (82.164712ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-258398 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
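For orientation, the claim and pod exercised here (testdata/storage-provisioner/pvc.yaml and pod.yaml) correspond to a PersistentVolumeClaim named myclaim mounted at /tmp/mount by an nginx container labeled test=storage-provisioner, as the describe output above shows. An illustrative equivalent manifest, reconstructed from that output rather than copied from the testdata files (the accessModes and storage size are assumptions):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]   # assumed
      resources:
        requests:
          storage: 500Mi               # assumed
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/nginx
        volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim

The failure recorded above is the docker.io/nginx pull hitting the same rate limit; no volume or binding errors appear in the pod's events.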
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-258398
helpers_test.go:243: (dbg) docker inspect functional-258398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515",
	        "Created": "2025-09-07T00:18:23.454660871Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-07T00:18:23.512135033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/hosts",
	        "LogPath": "/var/lib/docker/containers/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515/5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515-json.log",
	        "Name": "/functional-258398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-258398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-258398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b933cc290a918006bf0f6a53327694b64fbbc400e22b50b368cd3dc86799515",
	                "LowerDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8-init/diff:/var/lib/docker/overlay2/5a4b8b8cbe09f4c7d8197d949f1b03b5a8d427ad9c5a27d0359fd04ab981afab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/031f9ddc651ea253c54f4838f7ebcabe342f47639d19b61812d36a9a5a3340c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-258398",
	                "Source": "/var/lib/docker/volumes/functional-258398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-258398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-258398",
	                "name.minikube.sigs.k8s.io": "functional-258398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70d1c9b2d70f562f4d6c7d385513bf5d0eaaacc02e705345847c2caf08d1de2c",
	            "SandboxKey": "/var/run/docker/netns/70d1c9b2d70f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-258398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:fb:08:fa:de:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec956df0ad56bd722eaaaf7f53b6bca29823820d69caf85d31f449a0641cba5a",
	                    "EndpointID": "6323a4eda35746f28e310761f04758eede91222887d1875dbe5ab9375cae0ecb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-258398",
	                        "5b933cc290a9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
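The inspect output above is the full JSON blob; when a post-mortem only needs one field (for example the host port mapped to the API server port 8441/tcp), docker's built-in Go-template formatting keeps it short. A sketch using the field path visible in the JSON above:

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-258398
    # prints 33151 for this container, per the NetworkSettings.Ports block above

The same -f/--format flag works for any of the fields shown, e.g. .State.Status.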
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-258398 -n functional-258398
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 logs -n 25: (1.753895647s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-258398 image ls                                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ ssh     │ functional-258398 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image load --daemon kicbase/echo-server:functional-258398 --alsologtostderr                                                             │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ ssh     │ functional-258398 ssh sudo cat /etc/test/nested/copy/296249/hosts                                                                                         │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image ls                                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image load --daemon kicbase/echo-server:functional-258398 --alsologtostderr                                                             │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image ls                                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image save kicbase/echo-server:functional-258398 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image rm kicbase/echo-server:functional-258398 --alsologtostderr                                                                        │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image ls                                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image ls                                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ image   │ functional-258398 image save --daemon kicbase/echo-server:functional-258398 --alsologtostderr                                                             │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ ssh     │ functional-258398 ssh echo hello                                                                                                                          │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ ssh     │ functional-258398 ssh cat /etc/hostname                                                                                                                   │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │ 07 Sep 25 00:21 UTC │
	│ tunnel  │ functional-258398 tunnel --alsologtostderr                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │                     │
	│ tunnel  │ functional-258398 tunnel --alsologtostderr                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │                     │
	│ tunnel  │ functional-258398 tunnel --alsologtostderr                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:21 UTC │                     │
	│ service │ functional-258398 service list                                                                                                                            │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ service │ functional-258398 service list -o json                                                                                                                    │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ service │ functional-258398 service --namespace=default --https --url hello-node                                                                                    │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ service │ functional-258398 service hello-node --url --format={{.IP}}                                                                                               │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ service │ functional-258398 service hello-node --url                                                                                                                │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │                     │
	│ addons  │ functional-258398 addons list                                                                                                                             │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	│ addons  │ functional-258398 addons list -o json                                                                                                                     │ functional-258398 │ jenkins │ v1.36.0 │ 07 Sep 25 00:31 UTC │ 07 Sep 25 00:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:20:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:20:15.264764  319875 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:20:15.264914  319875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:20:15.264918  319875 out.go:374] Setting ErrFile to fd 2...
	I0907 00:20:15.264922  319875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:20:15.265176  319875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:20:15.265553  319875 out.go:368] Setting JSON to false
	I0907 00:20:15.266464  319875 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7365,"bootTime":1757197051,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:20:15.266528  319875 start.go:140] virtualization:  
	I0907 00:20:15.270111  319875 out.go:179] * [functional-258398] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 00:20:15.273275  319875 notify.go:220] Checking for updates...
	I0907 00:20:15.276800  319875 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:20:15.279787  319875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:20:15.282665  319875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:20:15.285461  319875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:20:15.288394  319875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 00:20:15.291352  319875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:20:15.294764  319875 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:20:15.294866  319875 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:20:15.317101  319875 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:20:15.317198  319875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:20:15.374613  319875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-07 00:20:15.364734775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:20:15.374704  319875 docker.go:318] overlay module found
	I0907 00:20:15.377925  319875 out.go:179] * Using the docker driver based on existing profile
	I0907 00:20:15.381001  319875 start.go:304] selected driver: docker
	I0907 00:20:15.381022  319875 start.go:918] validating driver "docker" against &{Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:20:15.381138  319875 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:20:15.381244  319875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:20:15.438699  319875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-07 00:20:15.428972697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:20:15.439230  319875 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:20:15.439251  319875 cni.go:84] Creating CNI manager for ""
	I0907 00:20:15.439310  319875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:20:15.439361  319875 start.go:348] cluster config:
	{Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:20:15.444258  319875 out.go:179] * Starting "functional-258398" primary control-plane node in "functional-258398" cluster
	I0907 00:20:15.447080  319875 cache.go:123] Beginning downloading kic base image for docker with crio
	I0907 00:20:15.450045  319875 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0907 00:20:15.453005  319875 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:20:15.453069  319875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0907 00:20:15.453076  319875 cache.go:58] Caching tarball of preloaded images
	I0907 00:20:15.453110  319875 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0907 00:20:15.453181  319875 preload.go:172] Found /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0907 00:20:15.453189  319875 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0907 00:20:15.453310  319875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/config.json ...
	I0907 00:20:15.475552  319875 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0907 00:20:15.475564  319875 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0907 00:20:15.475575  319875 cache.go:232] Successfully downloaded all kic artifacts
	I0907 00:20:15.475597  319875 start.go:360] acquireMachinesLock for functional-258398: {Name:mke6aae2e9f8dd67fc8f9d093dc880e9b359b888 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:20:15.475650  319875 start.go:364] duration metric: took 37.547µs to acquireMachinesLock for "functional-258398"
	I0907 00:20:15.475668  319875 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:20:15.475673  319875 fix.go:54] fixHost starting: 
	I0907 00:20:15.475935  319875 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
	I0907 00:20:15.493500  319875 fix.go:112] recreateIfNeeded on functional-258398: state=Running err=<nil>
	W0907 00:20:15.493520  319875 fix.go:138] unexpected machine state, will restart: <nil>
	I0907 00:20:15.496876  319875 out.go:252] * Updating the running docker "functional-258398" container ...
	I0907 00:20:15.496903  319875 machine.go:93] provisionDockerMachine start ...
	I0907 00:20:15.496979  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:15.524162  319875 main.go:141] libmachine: Using SSH client type: native
	I0907 00:20:15.524492  319875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0907 00:20:15.524499  319875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0907 00:20:15.648515  319875 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-258398
	
	I0907 00:20:15.648529  319875 ubuntu.go:182] provisioning hostname "functional-258398"
	I0907 00:20:15.648611  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:15.669853  319875 main.go:141] libmachine: Using SSH client type: native
	I0907 00:20:15.670159  319875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0907 00:20:15.670168  319875 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-258398 && echo "functional-258398" | sudo tee /etc/hostname
	I0907 00:20:15.809996  319875 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-258398
	
	I0907 00:20:15.810065  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:15.831045  319875 main.go:141] libmachine: Using SSH client type: native
	I0907 00:20:15.831358  319875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0907 00:20:15.831372  319875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-258398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-258398/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-258398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:20:15.965186  319875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:20:15.965201  319875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21132-294391/.minikube CaCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21132-294391/.minikube}
	I0907 00:20:15.965220  319875 ubuntu.go:190] setting up certificates
	I0907 00:20:15.965229  319875 provision.go:84] configureAuth start
	I0907 00:20:15.965290  319875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-258398
	I0907 00:20:15.982685  319875 provision.go:143] copyHostCerts
	I0907 00:20:15.982743  319875 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem, removing ...
	I0907 00:20:15.982764  319875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem
	I0907 00:20:15.982842  319875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem (1082 bytes)
	I0907 00:20:15.982952  319875 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem, removing ...
	I0907 00:20:15.982957  319875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem
	I0907 00:20:15.983031  319875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem (1123 bytes)
	I0907 00:20:15.983100  319875 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem, removing ...
	I0907 00:20:15.983105  319875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem
	I0907 00:20:15.983129  319875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem (1675 bytes)
	I0907 00:20:15.983184  319875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem org=jenkins.functional-258398 san=[127.0.0.1 192.168.49.2 functional-258398 localhost minikube]
	I0907 00:20:16.401473  319875 provision.go:177] copyRemoteCerts
	I0907 00:20:16.401530  319875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:20:16.401576  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:16.420637  319875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
	I0907 00:20:16.520117  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:20:16.545167  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0907 00:20:16.570292  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:20:16.594969  319875 provision.go:87] duration metric: took 629.71674ms to configureAuth
	I0907 00:20:16.594988  319875 ubuntu.go:206] setting minikube options for container-runtime
	I0907 00:20:16.595193  319875 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:20:16.595302  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:16.612507  319875 main.go:141] libmachine: Using SSH client type: native
	I0907 00:20:16.612842  319875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0907 00:20:16.612854  319875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:20:22.025246  319875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:20:22.025259  319875 machine.go:96] duration metric: took 6.528349697s to provisionDockerMachine
	I0907 00:20:22.025268  319875 start.go:293] postStartSetup for "functional-258398" (driver="docker")
	I0907 00:20:22.025279  319875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:20:22.025362  319875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:20:22.025408  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:22.046641  319875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
	I0907 00:20:22.138376  319875 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:20:22.141681  319875 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0907 00:20:22.141712  319875 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0907 00:20:22.141721  319875 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0907 00:20:22.141727  319875 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0907 00:20:22.141737  319875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-294391/.minikube/addons for local assets ...
	I0907 00:20:22.141798  319875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-294391/.minikube/files for local assets ...
	I0907 00:20:22.141891  319875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem -> 2962492.pem in /etc/ssl/certs
	I0907 00:20:22.141969  319875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/test/nested/copy/296249/hosts -> hosts in /etc/test/nested/copy/296249
	I0907 00:20:22.142023  319875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296249
	I0907 00:20:22.150824  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem --> /etc/ssl/certs/2962492.pem (1708 bytes)
	I0907 00:20:22.175784  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/test/nested/copy/296249/hosts --> /etc/test/nested/copy/296249/hosts (40 bytes)
	I0907 00:20:22.201680  319875 start.go:296] duration metric: took 176.396617ms for postStartSetup
	I0907 00:20:22.201773  319875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:20:22.201817  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:22.220383  319875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
	I0907 00:20:22.310219  319875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0907 00:20:22.315379  319875 fix.go:56] duration metric: took 6.839697744s for fixHost
	I0907 00:20:22.315395  319875 start.go:83] releasing machines lock for "functional-258398", held for 6.839738105s
	I0907 00:20:22.315477  319875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-258398
	I0907 00:20:22.332522  319875 ssh_runner.go:195] Run: cat /version.json
	I0907 00:20:22.332562  319875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:20:22.332567  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:22.332613  319875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
	I0907 00:20:22.359382  319875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
	I0907 00:20:22.360027  319875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
	I0907 00:20:22.570429  319875 ssh_runner.go:195] Run: systemctl --version
	I0907 00:20:22.574954  319875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:20:22.718642  319875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 00:20:22.723391  319875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:20:22.732124  319875 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0907 00:20:22.732196  319875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:20:22.742377  319875 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0907 00:20:22.742392  319875 start.go:495] detecting cgroup driver to use...
	I0907 00:20:22.742425  319875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0907 00:20:22.742472  319875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:20:22.755478  319875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:20:22.767626  319875 docker.go:218] disabling cri-docker service (if available) ...
	I0907 00:20:22.767682  319875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:20:22.781198  319875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:20:22.793615  319875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:20:22.923162  319875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:20:23.062334  319875 docker.go:234] disabling docker service ...
	I0907 00:20:23.062390  319875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:20:23.076146  319875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:20:23.088256  319875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:20:23.221007  319875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:20:23.350874  319875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:20:23.363847  319875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:20:23.382094  319875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0907 00:20:23.382151  319875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.392200  319875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:20:23.392270  319875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.402817  319875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.413152  319875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.423537  319875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:20:23.433136  319875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.443352  319875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.453155  319875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:20:23.462987  319875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:20:23.471663  319875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:20:23.480335  319875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:20:23.607029  319875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:20:28.395643  319875 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.788585367s)
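The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so that the pause image is registry.k8s.io/pause:3.10.1, the cgroup manager is cgroupfs, conmon runs in the "pod" cgroup, and "net.ipv4.ip_unprivileged_port_start=0" is added to default_sysctls before crio is restarted. A minimal verification sketch, assuming the same drop-in path (the grep is illustrative and not part of this log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",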
	I0907 00:20:28.395662  319875 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:20:28.395725  319875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:20:28.399438  319875 start.go:563] Will wait 60s for crictl version
	I0907 00:20:28.399500  319875 ssh_runner.go:195] Run: which crictl
	I0907 00:20:28.402899  319875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:20:28.441248  319875 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0907 00:20:28.441324  319875 ssh_runner.go:195] Run: crio --version
	I0907 00:20:28.479682  319875 ssh_runner.go:195] Run: crio --version
	I0907 00:20:28.524897  319875 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0907 00:20:28.527817  319875 cli_runner.go:164] Run: docker network inspect functional-258398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0907 00:20:28.544345  319875 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0907 00:20:28.551269  319875 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0907 00:20:28.554219  319875 kubeadm.go:875] updating cluster {Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 00:20:28.554340  319875 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:20:28.554417  319875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:20:28.597927  319875 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:20:28.597938  319875 crio.go:433] Images already preloaded, skipping extraction
	I0907 00:20:28.597994  319875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:20:28.634387  319875 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:20:28.634399  319875 cache_images.go:85] Images are preloaded, skipping loading
	I0907 00:20:28.634406  319875 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0907 00:20:28.634509  319875 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-258398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0907 00:20:28.634588  319875 ssh_runner.go:195] Run: crio config
	I0907 00:20:28.683468  319875 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0907 00:20:28.683488  319875 cni.go:84] Creating CNI manager for ""
	I0907 00:20:28.683497  319875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:20:28.683505  319875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 00:20:28.683531  319875 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-258398 NodeName:functional-258398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:20:28.683647  319875 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-258398"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:20:28.683714  319875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0907 00:20:28.692883  319875 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:20:28.692946  319875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:20:28.702005  319875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0907 00:20:28.720378  319875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:20:28.738297  319875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
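For context, the rendered kubeadm config above is staged on the node at /var/tmp/minikube/kubeadm.yaml.new. On a fresh cluster a file of this shape is normally handed to kubeadm via its --config flag; the following is a hedged sketch only (the exact invocation this restart performs is not shown in the excerpt, and the kubeadm path is assumed from the binaries directory checked above):

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new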
	I0907 00:20:28.756198  319875 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0907 00:20:28.759996  319875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:20:28.892701  319875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:20:28.905686  319875 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398 for IP: 192.168.49.2
	I0907 00:20:28.905697  319875 certs.go:194] generating shared ca certs ...
	I0907 00:20:28.905710  319875 certs.go:226] acquiring lock for ca certs: {Name:mkf2f86d550791cd126f7b3aeff6c351ed5c0816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:20:28.905858  319875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key
	I0907 00:20:28.905926  319875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key
	I0907 00:20:28.905933  319875 certs.go:256] generating profile certs ...
	I0907 00:20:28.906024  319875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.key
	I0907 00:20:28.906071  319875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/apiserver.key.7386c67b
	I0907 00:20:28.906114  319875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/proxy-client.key
	I0907 00:20:28.906235  319875 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/296249.pem (1338 bytes)
	W0907 00:20:28.906261  319875 certs.go:480] ignoring /home/jenkins/minikube-integration/21132-294391/.minikube/certs/296249_empty.pem, impossibly tiny 0 bytes
	I0907 00:20:28.906267  319875 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:20:28.906289  319875 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:20:28.906312  319875 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:20:28.906331  319875 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem (1675 bytes)
	I0907 00:20:28.906373  319875 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem (1708 bytes)
	I0907 00:20:28.906951  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:20:28.932099  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0907 00:20:28.957722  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:20:28.983029  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:20:29.008574  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0907 00:20:29.033988  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0907 00:20:29.059216  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:20:29.084042  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:20:29.108452  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem --> /usr/share/ca-certificates/2962492.pem (1708 bytes)
	I0907 00:20:29.132337  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:20:29.156123  319875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/certs/296249.pem --> /usr/share/ca-certificates/296249.pem (1338 bytes)
	I0907 00:20:29.179998  319875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:20:29.198281  319875 ssh_runner.go:195] Run: openssl version
	I0907 00:20:29.203729  319875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296249.pem && ln -fs /usr/share/ca-certificates/296249.pem /etc/ssl/certs/296249.pem"
	I0907 00:20:29.213487  319875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296249.pem
	I0907 00:20:29.217437  319875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  7 00:18 /usr/share/ca-certificates/296249.pem
	I0907 00:20:29.217506  319875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296249.pem
	I0907 00:20:29.224988  319875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296249.pem /etc/ssl/certs/51391683.0"
	I0907 00:20:29.234810  319875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2962492.pem && ln -fs /usr/share/ca-certificates/2962492.pem /etc/ssl/certs/2962492.pem"
	I0907 00:20:29.244553  319875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2962492.pem
	I0907 00:20:29.247994  319875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  7 00:18 /usr/share/ca-certificates/2962492.pem
	I0907 00:20:29.248052  319875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2962492.pem
	I0907 00:20:29.254931  319875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2962492.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:20:29.264177  319875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:20:29.273852  319875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:20:29.277170  319875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  7 00:10 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:20:29.277224  319875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:20:29.284221  319875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
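The link names above follow the OpenSSL hashed-directory convention: the file under /etc/ssl/certs is named after the certificate's subject hash with a ".0" suffix, which is where the precomputed names 51391683.0, 3ec20f2e.0 and b5213941.0 come from. A small sketch of the same pattern with the hash computed on the fly (illustrative, not taken from this log):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"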
	I0907 00:20:29.293403  319875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 00:20:29.296895  319875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:20:29.303714  319875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:20:29.310896  319875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:20:29.317934  319875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:20:29.324877  319875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:20:29.331828  319875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:20:29.339085  319875 kubeadm.go:392] StartCluster: {Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:20:29.339174  319875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:20:29.339238  319875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:20:29.378603  319875 cri.go:89] found id: "997b2d26cc8578cb93e90c9a02727c48df63d896c893ff7a9cf8f5328848caaa"
	I0907 00:20:29.378619  319875 cri.go:89] found id: "37a1efe2c040b1d9b0c64308dcaf79efb3d981d5ea69fdc38ef5642da1edd315"
	I0907 00:20:29.378623  319875 cri.go:89] found id: "bfeeacf134e42a28eea9f304b94bb9e9e8dbe00f1e3b5553b3d3370dcbe2853c"
	I0907 00:20:29.378626  319875 cri.go:89] found id: "b443422dc44b238ac8bf8aebfd292ce8217f9df9c0a84c02e9aa8b2c94165428"
	I0907 00:20:29.378628  319875 cri.go:89] found id: "01c65c16fb373bb46c344b6a913c4d75f29c61f0836c1cbd4ac158c09a5cbd6e"
	I0907 00:20:29.378630  319875 cri.go:89] found id: "b465a21ac08f68c15cd5c2e9eb3b0dbdb157abbea159a6e40a5457376c004ea4"
	I0907 00:20:29.378633  319875 cri.go:89] found id: "e583e4d6e6fa321b536b9585b078a2495f4006697027b65b59a97bc290776685"
	I0907 00:20:29.378635  319875 cri.go:89] found id: "d6af83dec4758df3d40a670815100c1ff162fda2953d0e2c204b8645a7a471b3"
	I0907 00:20:29.378637  319875 cri.go:89] found id: "95905be79892087da1bb0ab06cc60479acf57238bdb82161989823b15e15d6bd"
	I0907 00:20:29.378642  319875 cri.go:89] found id: "5342f91c621467b0695272f3ca1c2ddb957f402a5af6862817bca3f7940970b8"
	I0907 00:20:29.378645  319875 cri.go:89] found id: "766c680e86fb6d46c14bc9526959b116bae469faad7802ce8a4f8f6d4ed52c33"
	I0907 00:20:29.378648  319875 cri.go:89] found id: "2c5360d59db03a132f992cd9cb2cf7c657f80bec9b6b81359b6fb76238670425"
	I0907 00:20:29.378650  319875 cri.go:89] found id: "ff599508dad127eb80db3c6e752fa65e7476e4ded9922baa4a4e9340776962e2"
	I0907 00:20:29.378652  319875 cri.go:89] found id: "9085209920ee90e00d44a2b1b46172b706f6d00ca23479b7f240cce8ab9e58e9"
	I0907 00:20:29.378654  319875 cri.go:89] found id: "b4503a6e593fbbb47c350044b03020c97b619c4977e19b38debc37b38f59b9c2"
	I0907 00:20:29.378661  319875 cri.go:89] found id: "bfdcf1177dafd4852c314fda7be23c4353cea56e715fdc16ddd75c96ecbe5087"
	I0907 00:20:29.378663  319875 cri.go:89] found id: ""
	I0907 00:20:29.378713  319875 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
helpers_test.go:269: (dbg) Run:  kubectl --context functional-258398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-258398 describe pod hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-258398 describe pod hello-node-75c85bcc94-p29gr hello-node-connect-7d85dfc575-rsgjm nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-p29gr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:21:06 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdvdd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sdvdd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p29gr to functional-258398
	  Normal   Pulling    5m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     5m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     5m57s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m43s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m36s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
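	The events above show CRI-O's short-name handling: "kicbase/echo-server" carries no registry component, and the node's /etc/containers/registries.conf defines no unqualified-search registries to expand it with, so the pull fails before any registry is contacted. Two standard remedies are to reference a fully qualified image (for example docker.io/kicbase/echo-server) or to configure a search registry; a hedged sketch of the latter using containers-registries.conf(5) syntax follows (the drop-in file name is a placeholder, and this is not a change the test itself makes):

	echo 'unqualified-search-registries = ["docker.io"]' \
	  | sudo tee /etc/containers/registries.conf.d/99-unqualified-search.conf
	sudo systemctl restart crio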
	
	
	Name:             hello-node-connect-7d85dfc575-rsgjm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:31:12 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-989l6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-989l6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  3s    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rsgjm to functional-258398
	  Normal   Pulling    4s    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4s    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     4s    kubelet            Error: ErrImagePull
	  Normal   BackOff    4s    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s    kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:21:11 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glkmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-glkmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-258398
	  Warning  Failed     9m4s                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m23s (x2 over 8m19s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m32s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m2s (x5 over 9m4s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m2s (x2 over 6m4s)    kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s (x16 over 9m3s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    100s (x21 over 9m3s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
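	The nginx-svc failures above are Docker Hub's unauthenticated pull rate limit (toomanyrequests) rather than a cluster-side fault. A common workaround is to pull as an authenticated user so the higher per-account limit applies; a hedged sketch using standard kubectl commands (the secret name regcred and the credentials are placeholders, not values from this run):

	kubectl --context functional-258398 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-258398 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'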
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258398/192.168.49.2
	Start Time:       Sun, 07 Sep 2025 00:27:12 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cvc2w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cvc2w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-258398
	  Warning  Failed     107s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    68s (x5 over 3m31s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     68s (x5 over 3m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    53s (x4 over 4m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12s (x3 over 3m31s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12s (x4 over 3m31s)  kubelet            Error: ErrImagePull

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (248.70s)
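Note: both pods above (nginx-svc and sp-pod) sit in ImagePullBackOff because anonymous pulls from Docker Hub hit the toomanyrequests rate limit on this runner. A minimal sketch of one way to rerun with authenticated pulls, assuming a Docker Hub account is available; the regcred name and the credential placeholders are illustrative and not part of the test suite:

    # Create a Docker Hub pull secret in the default namespace (placeholder credentials).
    kubectl --context functional-258398 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<dockerhub-user> --docker-password=<dockerhub-token>
    # Attach it to the default service account so pods pick it up on their next pull attempt.
    kubectl --context functional-258398 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Authenticated pulls are counted against a per-account quota rather than the shared anonymous per-IP quota, which is usually enough for a single test run.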

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-258398 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-258398 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-p29gr" [f818d25d-5bb5-4279-ab52-48681998cbe4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-07 00:31:06.780532683 +0000 UTC m=+1284.211393189
functional_test.go:1460: (dbg) Run:  kubectl --context functional-258398 describe po hello-node-75c85bcc94-p29gr -n default
functional_test.go:1460: (dbg) kubectl --context functional-258398 describe po hello-node-75c85bcc94-p29gr -n default:
Name:             hello-node-75c85bcc94-p29gr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-258398/192.168.49.2
Start Time:       Sun, 07 Sep 2025 00:21:06 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdvdd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-sdvdd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p29gr to functional-258398
  Normal   Pulling    5m47s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     5m47s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     5m47s (x5 over 10m)     kubelet            Error: ErrImagePull
  Warning  Failed     4m33s (x16 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m26s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-258398 logs hello-node-75c85bcc94-p29gr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-258398 logs hello-node-75c85bcc94-p29gr -n default: exit status 1 (98.570207ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-p29gr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-258398 logs hello-node-75c85bcc94-p29gr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.73s)
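Note: unlike the Docker Hub rate-limit failures above, this one is a short-name resolution error: the deployment was created with the bare image name kicbase/echo-server, and CRI-O refuses short names when no unqualified-search registries are defined in /etc/containers/registries.conf. A sketch of the two usual workarounds, assuming the image is hosted on Docker Hub; neither command is part of the test itself:

    # Option 1: fully qualify the image so no unqualified-search registry is needed.
    kubectl --context functional-258398 create deployment hello-node \
      --image docker.io/kicbase/echo-server
    # Option 2 (node-wide): define unqualified-search-registries = ["docker.io"] in the
    # node's /etc/containers/registries.conf and restart CRI-O, which changes pull
    # behaviour for every short image name on that node.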

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-258398 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [523eee7f-cb23-4b14-80b1-ef07ee3a6991] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0907 00:22:59.743653  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:23:27.458053  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258398 -n functional-258398
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-07 00:25:12.161677029 +0000 UTC m=+929.592537543
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-258398 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-258398 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-258398/192.168.49.2
Start Time:       Sun, 07 Sep 2025 00:21:11 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glkmv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-glkmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-258398
  Warning  Failed     3m                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     79s (x3 over 3m)     kubelet            Error: ErrImagePull
  Warning  Failed     79s (x2 over 2m15s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    43s (x5 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     43s (x5 over 2m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    30s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-258398 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-258398 logs nginx-svc -n default: exit status 1 (108.409035ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-258398 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.88s)
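Note: same root cause as the PersistentVolumeClaim failure above: docker.io/nginx:alpine cannot be pulled anonymously from this runner. Besides authenticating, a sketch of an alternative mitigation is to side-load the image from the host instead of pulling inside the cluster, assuming the host itself can still pull it (or already has it cached):

    # Pull on the host, then copy the image into the cluster's container runtime.
    docker pull docker.io/nginx:alpine
    minikube -p functional-258398 image load docker.io/nginx:alpine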

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (114.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0907 00:25:12.360032  296249 retry.go:31] will retry after 2.579179317s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:25:14.940255  296249 retry.go:31] will retry after 5.971879999s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:25:20.912569  296249 retry.go:31] will retry after 7.345095105s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:25:28.257800  296249 retry.go:31] will retry after 13.216943148s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:25:41.475620  296249 retry.go:31] will retry after 8.431255212s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:25:49.907073  296249 retry.go:31] will retry after 24.145680553s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:26:14.053323  296249 retry.go:31] will retry after 21.443787713s: Temporary Error: Get "http:": http: no Host in request URL
I0907 00:26:35.497731  296249 retry.go:31] will retry after 31.677491688s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-258398 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.97.134.38   10.97.134.38   80:32067/TCP   5m56s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (114.89s)
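Note: the repeated retries against "http:" show the test never obtained a usable URL: nginx-svc has an external IP assigned but no ready endpoints behind it, so the tunnel check gives up before a host is ever filled in. A manual check along the same lines, assuming minikube tunnel is left running in another shell and the nginx-svc pod eventually becomes Ready (names taken from the output above):

    # With `minikube -p functional-258398 tunnel` running elsewhere:
    EXTERNAL_IP=$(kubectl --context functional-258398 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://${EXTERNAL_IP}/" | grep -q "Welcome to nginx!" && echo "nginx reachable"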

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 service --namespace=default --https --url hello-node: exit status 115 (423.658366ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30147
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-258398 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 service hello-node --url --format={{.IP}}: exit status 115 (390.733938ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-258398 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 service hello-node --url: exit status 115 (390.359582ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30147
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-258398 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30147
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
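Note: ServiceCmd/HTTPS, ServiceCmd/Format, and ServiceCmd/URL all exit with SVC_UNREACHABLE for the same underlying reason: the hello-node deployment from DeployApp never produced a running pod, so NodePort 30147 has nothing behind it. A quick way to confirm that before suspecting the service subcommands themselves (selectors taken from the earlier output):

    # The service exists, but its endpoints stay empty while the pod is in ImagePullBackOff.
    kubectl --context functional-258398 get pods -l app=hello-node -o wide
    kubectl --context functional-258398 get endpoints hello-node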

                                                
                                    
TestNetworkPlugins/group/calico/Start (935.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0907 01:27:59.743368  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m35.060539969s)

                                                
                                                
-- stdout --
	* [calico-690290] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-690290" primary control-plane node in "calico-690290" cluster
	* Pulling base image v0.0.47-1756980985-21488 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 01:27:39.618788  530612 out.go:360] Setting OutFile to fd 1 ...
	I0907 01:27:39.618979  530612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:27:39.619006  530612 out.go:374] Setting ErrFile to fd 2...
	I0907 01:27:39.619024  530612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:27:39.619330  530612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 01:27:39.619830  530612 out.go:368] Setting JSON to false
	I0907 01:27:39.620875  530612 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11409,"bootTime":1757197051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 01:27:39.620978  530612 start.go:140] virtualization:  
	I0907 01:27:39.625480  530612 out.go:179] * [calico-690290] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 01:27:39.629024  530612 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 01:27:39.629110  530612 notify.go:220] Checking for updates...
	I0907 01:27:39.635875  530612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 01:27:39.639352  530612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 01:27:39.642467  530612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 01:27:39.645641  530612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 01:27:39.648711  530612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 01:27:39.652345  530612 config.go:182] Loaded profile config "kindnet-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 01:27:39.652486  530612 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 01:27:39.681953  530612 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 01:27:39.682079  530612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 01:27:39.739877  530612 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 01:27:39.730369961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 01:27:39.739989  530612 docker.go:318] overlay module found
	I0907 01:27:39.743456  530612 out.go:179] * Using the docker driver based on user configuration
	I0907 01:27:39.746383  530612 start.go:304] selected driver: docker
	I0907 01:27:39.746407  530612 start.go:918] validating driver "docker" against <nil>
	I0907 01:27:39.746423  530612 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 01:27:39.747199  530612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 01:27:39.798618  530612 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 01:27:39.789095799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 01:27:39.798772  530612 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0907 01:27:39.798997  530612 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 01:27:39.801941  530612 out.go:179] * Using Docker driver with root privileges
	I0907 01:27:39.804949  530612 cni.go:84] Creating CNI manager for "calico"
	I0907 01:27:39.804978  530612 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0907 01:27:39.805063  530612 start.go:348] cluster config:
	{Name:calico-690290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-690290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0907 01:27:39.808540  530612 out.go:179] * Starting "calico-690290" primary control-plane node in "calico-690290" cluster
	I0907 01:27:39.811430  530612 cache.go:123] Beginning downloading kic base image for docker with crio
	I0907 01:27:39.814444  530612 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0907 01:27:39.817285  530612 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 01:27:39.817316  530612 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0907 01:27:39.817356  530612 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0907 01:27:39.817365  530612 cache.go:58] Caching tarball of preloaded images
	I0907 01:27:39.817451  530612 preload.go:172] Found /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0907 01:27:39.817461  530612 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0907 01:27:39.817573  530612 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/config.json ...
	I0907 01:27:39.817597  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/config.json: {Name:mk6c4a49accd5ea574f8e5b195719ed28eb4827c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:39.835918  530612 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0907 01:27:39.835940  530612 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0907 01:27:39.835957  530612 cache.go:232] Successfully downloaded all kic artifacts
	I0907 01:27:39.835981  530612 start.go:360] acquireMachinesLock for calico-690290: {Name:mk6f5eff4d0c9afd764b3a040dfc937ebac6f701 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 01:27:39.836095  530612 start.go:364] duration metric: took 93.408µs to acquireMachinesLock for "calico-690290"
	I0907 01:27:39.836126  530612 start.go:93] Provisioning new machine with config: &{Name:calico-690290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-690290 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 01:27:39.836202  530612 start.go:125] createHost starting for "" (driver="docker")
	I0907 01:27:39.839593  530612 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0907 01:27:39.839817  530612 start.go:159] libmachine.API.Create for "calico-690290" (driver="docker")
	I0907 01:27:39.839854  530612 client.go:168] LocalClient.Create starting
	I0907 01:27:39.839928  530612 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem
	I0907 01:27:39.839968  530612 main.go:141] libmachine: Decoding PEM data...
	I0907 01:27:39.839987  530612 main.go:141] libmachine: Parsing certificate...
	I0907 01:27:39.840113  530612 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem
	I0907 01:27:39.840142  530612 main.go:141] libmachine: Decoding PEM data...
	I0907 01:27:39.840158  530612 main.go:141] libmachine: Parsing certificate...
	I0907 01:27:39.840557  530612 cli_runner.go:164] Run: docker network inspect calico-690290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0907 01:27:39.855439  530612 cli_runner.go:211] docker network inspect calico-690290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0907 01:27:39.855523  530612 network_create.go:284] running [docker network inspect calico-690290] to gather additional debugging logs...
	I0907 01:27:39.855552  530612 cli_runner.go:164] Run: docker network inspect calico-690290
	W0907 01:27:39.876988  530612 cli_runner.go:211] docker network inspect calico-690290 returned with exit code 1
	I0907 01:27:39.877019  530612 network_create.go:287] error running [docker network inspect calico-690290]: docker network inspect calico-690290: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-690290 not found
	I0907 01:27:39.877033  530612 network_create.go:289] output of [docker network inspect calico-690290]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-690290 not found
	
	** /stderr **
	I0907 01:27:39.877148  530612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0907 01:27:39.898074  530612 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94b882556325 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:a1:d5:d3:ef:e7} reservation:<nil>}
	I0907 01:27:39.898510  530612 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2f48d2b0e54 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:6b:4b:54:f6:4b} reservation:<nil>}
	I0907 01:27:39.898776  530612 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f23e5ba5612e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:c6:34:d5:d7:b5} reservation:<nil>}
	I0907 01:27:39.899223  530612 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019af590}
	I0907 01:27:39.899248  530612 network_create.go:124] attempt to create docker network calico-690290 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0907 01:27:39.899311  530612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-690290 calico-690290
	I0907 01:27:39.958717  530612 network_create.go:108] docker network calico-690290 192.168.76.0/24 created
	I0907 01:27:39.958754  530612 kic.go:121] calculated static IP "192.168.76.2" for the "calico-690290" container
	I0907 01:27:39.958841  530612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0907 01:27:39.980454  530612 cli_runner.go:164] Run: docker volume create calico-690290 --label name.minikube.sigs.k8s.io=calico-690290 --label created_by.minikube.sigs.k8s.io=true
	I0907 01:27:39.998361  530612 oci.go:103] Successfully created a docker volume calico-690290
	I0907 01:27:39.998512  530612 cli_runner.go:164] Run: docker run --rm --name calico-690290-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-690290 --entrypoint /usr/bin/test -v calico-690290:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0907 01:27:40.520338  530612 oci.go:107] Successfully prepared a docker volume calico-690290
	I0907 01:27:40.520390  530612 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 01:27:40.520410  530612 kic.go:194] Starting extracting preloaded images to volume ...
	I0907 01:27:40.520483  530612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v calico-690290:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0907 01:27:44.916647  530612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v calico-690290:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.396119803s)
	I0907 01:27:44.916693  530612 kic.go:203] duration metric: took 4.396280028s to extract preloaded images to volume ...
	W0907 01:27:44.916885  530612 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0907 01:27:44.917013  530612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0907 01:27:44.971734  530612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-690290 --name calico-690290 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-690290 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-690290 --network calico-690290 --ip 192.168.76.2 --volume calico-690290:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0907 01:27:45.488406  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Running}}
	I0907 01:27:45.512982  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Status}}
	I0907 01:27:45.543121  530612 cli_runner.go:164] Run: docker exec calico-690290 stat /var/lib/dpkg/alternatives/iptables
	I0907 01:27:45.623523  530612 oci.go:144] the created container "calico-690290" has a running status.
	I0907 01:27:45.623561  530612 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa...
	I0907 01:27:46.314536  530612 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0907 01:27:46.336047  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Status}}
	I0907 01:27:46.359057  530612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0907 01:27:46.359077  530612 kic_runner.go:114] Args: [docker exec --privileged calico-690290 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0907 01:27:46.421594  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Status}}
	I0907 01:27:46.439569  530612 machine.go:93] provisionDockerMachine start ...
	I0907 01:27:46.439752  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:46.463894  530612 main.go:141] libmachine: Using SSH client type: native
	I0907 01:27:46.464237  530612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0907 01:27:46.464246  530612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0907 01:27:46.594582  530612 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-690290
	
	I0907 01:27:46.594612  530612 ubuntu.go:182] provisioning hostname "calico-690290"
	I0907 01:27:46.594673  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:46.613949  530612 main.go:141] libmachine: Using SSH client type: native
	I0907 01:27:46.614259  530612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0907 01:27:46.614271  530612 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-690290 && echo "calico-690290" | sudo tee /etc/hostname
	I0907 01:27:46.759256  530612 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-690290
	
	I0907 01:27:46.759443  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:46.791013  530612 main.go:141] libmachine: Using SSH client type: native
	I0907 01:27:46.791319  530612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0907 01:27:46.791337  530612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-690290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-690290/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-690290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 01:27:46.917702  530612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 01:27:46.917725  530612 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21132-294391/.minikube CaCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21132-294391/.minikube}
	I0907 01:27:46.917744  530612 ubuntu.go:190] setting up certificates
	I0907 01:27:46.917752  530612 provision.go:84] configureAuth start
	I0907 01:27:46.917811  530612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-690290
	I0907 01:27:46.935794  530612 provision.go:143] copyHostCerts
	I0907 01:27:46.935859  530612 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem, removing ...
	I0907 01:27:46.935869  530612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem
	I0907 01:27:46.935948  530612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/ca.pem (1082 bytes)
	I0907 01:27:46.936040  530612 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem, removing ...
	I0907 01:27:46.936046  530612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem
	I0907 01:27:46.936072  530612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/cert.pem (1123 bytes)
	I0907 01:27:46.936125  530612 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem, removing ...
	I0907 01:27:46.936129  530612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem
	I0907 01:27:46.936153  530612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21132-294391/.minikube/key.pem (1675 bytes)
	I0907 01:27:46.936200  530612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem org=jenkins.calico-690290 san=[127.0.0.1 192.168.76.2 calico-690290 localhost minikube]
	I0907 01:27:47.137618  530612 provision.go:177] copyRemoteCerts
	I0907 01:27:47.137705  530612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 01:27:47.137749  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:47.157172  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:27:47.251101  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 01:27:47.279831  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0907 01:27:47.306106  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 01:27:47.332079  530612 provision.go:87] duration metric: took 414.305347ms to configureAuth
	I0907 01:27:47.332109  530612 ubuntu.go:206] setting minikube options for container-runtime
	I0907 01:27:47.332309  530612 config.go:182] Loaded profile config "calico-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 01:27:47.332417  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:47.350131  530612 main.go:141] libmachine: Using SSH client type: native
	I0907 01:27:47.350433  530612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0907 01:27:47.350451  530612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 01:27:47.592598  530612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 01:27:47.592623  530612 machine.go:96] duration metric: took 1.153032422s to provisionDockerMachine
	I0907 01:27:47.592633  530612 client.go:171] duration metric: took 7.752770103s to LocalClient.Create
	I0907 01:27:47.592647  530612 start.go:167] duration metric: took 7.752830551s to libmachine.API.Create "calico-690290"
	I0907 01:27:47.592695  530612 start.go:293] postStartSetup for "calico-690290" (driver="docker")
	I0907 01:27:47.592706  530612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 01:27:47.592784  530612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 01:27:47.592858  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:47.613000  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:27:47.706307  530612 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 01:27:47.709737  530612 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0907 01:27:47.709770  530612 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0907 01:27:47.709780  530612 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0907 01:27:47.709787  530612 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0907 01:27:47.709799  530612 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-294391/.minikube/addons for local assets ...
	I0907 01:27:47.709855  530612 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-294391/.minikube/files for local assets ...
	I0907 01:27:47.709944  530612 filesync.go:149] local asset: /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem -> 2962492.pem in /etc/ssl/certs
	I0907 01:27:47.710042  530612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 01:27:47.718699  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem --> /etc/ssl/certs/2962492.pem (1708 bytes)
	I0907 01:27:47.744595  530612 start.go:296] duration metric: took 151.884917ms for postStartSetup
	I0907 01:27:47.745052  530612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-690290
	I0907 01:27:47.761769  530612 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/config.json ...
	I0907 01:27:47.762067  530612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 01:27:47.762123  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:47.779365  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:27:47.870543  530612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0907 01:27:47.875358  530612 start.go:128] duration metric: took 8.039141815s to createHost
	I0907 01:27:47.875385  530612 start.go:83] releasing machines lock for "calico-690290", held for 8.039276799s
	I0907 01:27:47.875486  530612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-690290
	I0907 01:27:47.892669  530612 ssh_runner.go:195] Run: cat /version.json
	I0907 01:27:47.892719  530612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 01:27:47.892783  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:47.892722  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:27:47.917494  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:27:47.927448  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:27:48.144480  530612 ssh_runner.go:195] Run: systemctl --version
	I0907 01:27:48.149044  530612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 01:27:48.295715  530612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0907 01:27:48.300005  530612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 01:27:48.323859  530612 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0907 01:27:48.324011  530612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 01:27:48.361159  530612 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0907 01:27:48.361184  530612 start.go:495] detecting cgroup driver to use...
	I0907 01:27:48.361217  530612 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0907 01:27:48.361270  530612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 01:27:48.378887  530612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 01:27:48.391065  530612 docker.go:218] disabling cri-docker service (if available) ...
	I0907 01:27:48.391172  530612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 01:27:48.406178  530612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 01:27:48.420908  530612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 01:27:48.504714  530612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 01:27:48.607631  530612 docker.go:234] disabling docker service ...
	I0907 01:27:48.607731  530612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 01:27:48.630174  530612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 01:27:48.644386  530612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 01:27:48.741939  530612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 01:27:48.842237  530612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 01:27:48.853678  530612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 01:27:48.873595  530612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0907 01:27:48.873658  530612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.883863  530612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 01:27:48.883931  530612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.894227  530612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.904766  530612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.914822  530612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 01:27:48.924672  530612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.935045  530612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.952547  530612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 01:27:48.966889  530612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 01:27:48.975800  530612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
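Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings. This fragment is reconstructed from the commands in the log rather than dumped from the node, and the surrounding TOML section headers are omitted:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]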
	I0907 01:27:48.984564  530612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 01:27:49.074991  530612 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 01:27:49.203866  530612 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 01:27:49.203992  530612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 01:27:49.209513  530612 start.go:563] Will wait 60s for crictl version
	I0907 01:27:49.209631  530612 ssh_runner.go:195] Run: which crictl
	I0907 01:27:49.213340  530612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 01:27:49.256146  530612 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0907 01:27:49.256296  530612 ssh_runner.go:195] Run: crio --version
	I0907 01:27:49.298160  530612 ssh_runner.go:195] Run: crio --version
	I0907 01:27:49.342993  530612 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0907 01:27:49.345835  530612 cli_runner.go:164] Run: docker network inspect calico-690290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0907 01:27:49.361799  530612 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0907 01:27:49.365398  530612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 01:27:49.376092  530612 kubeadm.go:875] updating cluster {Name:calico-690290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-690290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 01:27:49.376208  530612 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 01:27:49.376266  530612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:27:49.461466  530612 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 01:27:49.461495  530612 crio.go:433] Images already preloaded, skipping extraction
	I0907 01:27:49.461553  530612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 01:27:49.498365  530612 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 01:27:49.498388  530612 cache_images.go:85] Images are preloaded, skipping loading
	I0907 01:27:49.498397  530612 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0907 01:27:49.498490  530612 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-690290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-690290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0907 01:27:49.498576  530612 ssh_runner.go:195] Run: crio config
	I0907 01:27:49.551045  530612 cni.go:84] Creating CNI manager for "calico"
	I0907 01:27:49.551071  530612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 01:27:49.551096  530612 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-690290 NodeName:calico-690290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 01:27:49.551305  530612 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-690290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 01:27:49.551405  530612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0907 01:27:49.561704  530612 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 01:27:49.561774  530612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 01:27:49.571251  530612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0907 01:27:49.594711  530612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 01:27:49.615199  530612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
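The 2210-byte file written here is the rendered kubeadm config shown above, and it is what kubeadm init consumes later in this run. If a run like this fails at the init step, one way to sanity-check the generated file on the node is kubeadm's own validator (a recent-kubeadm subcommand; treat this as a suggested check, not something the test itself runs), using the same binary path as this log:

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new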
	I0907 01:27:49.633715  530612 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0907 01:27:49.637462  530612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
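This grep/echo pipeline, together with the identical one at 01:27:49.365398 above, rewrites /etc/hosts idempotently (remove any stale entry, append the current one). After provisioning, the node's hosts file therefore carries the two minikube-internal names used throughout the rest of the run:

	192.168.76.1	host.minikube.internal
	192.168.76.2	control-plane.minikube.internal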
	I0907 01:27:49.648736  530612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 01:27:49.739516  530612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 01:27:49.755022  530612 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290 for IP: 192.168.76.2
	I0907 01:27:49.755048  530612 certs.go:194] generating shared ca certs ...
	I0907 01:27:49.755064  530612 certs.go:226] acquiring lock for ca certs: {Name:mkf2f86d550791cd126f7b3aeff6c351ed5c0816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:49.755230  530612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key
	I0907 01:27:49.755285  530612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key
	I0907 01:27:49.755293  530612 certs.go:256] generating profile certs ...
	I0907 01:27:49.755368  530612 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/client.key
	I0907 01:27:49.755383  530612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/client.crt with IP's: []
	I0907 01:27:50.429905  530612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/client.crt ...
	I0907 01:27:50.429938  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/client.crt: {Name:mk3b1023391c7b9cab05ff833ce5fac20559a679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:50.430838  530612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/client.key ...
	I0907 01:27:50.430856  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/client.key: {Name:mkbfb0a8e5aa5e6b9875715e10783c69fac6d2a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:50.431582  530612 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.key.6c24a702
	I0907 01:27:50.431604  530612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.crt.6c24a702 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0907 01:27:50.742760  530612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.crt.6c24a702 ...
	I0907 01:27:50.742792  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.crt.6c24a702: {Name:mk7db4abff2965a549f5b5524194c1c560719a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:50.743661  530612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.key.6c24a702 ...
	I0907 01:27:50.743681  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.key.6c24a702: {Name:mkb0435f215c61079df7057cf233090db9280e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:50.744348  530612 certs.go:381] copying /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.crt.6c24a702 -> /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.crt
	I0907 01:27:50.744447  530612 certs.go:385] copying /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.key.6c24a702 -> /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.key
	I0907 01:27:50.744512  530612 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.key
	I0907 01:27:50.744531  530612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.crt with IP's: []
	I0907 01:27:51.287538  530612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.crt ...
	I0907 01:27:51.287569  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.crt: {Name:mk12638aaf6094dc13c1ea8b68f0c694724cae95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:51.288404  530612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.key ...
	I0907 01:27:51.288422  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.key: {Name:mkdadd63894439bd74f10388ff78cbb98c776c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:27:51.288619  530612 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/296249.pem (1338 bytes)
	W0907 01:27:51.288663  530612 certs.go:480] ignoring /home/jenkins/minikube-integration/21132-294391/.minikube/certs/296249_empty.pem, impossibly tiny 0 bytes
	I0907 01:27:51.288678  530612 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 01:27:51.288703  530612 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/ca.pem (1082 bytes)
	I0907 01:27:51.288729  530612 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/cert.pem (1123 bytes)
	I0907 01:27:51.288757  530612 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/certs/key.pem (1675 bytes)
	I0907 01:27:51.288803  530612 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem (1708 bytes)
	I0907 01:27:51.289390  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 01:27:51.316806  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0907 01:27:51.342659  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 01:27:51.368559  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 01:27:51.395188  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0907 01:27:51.427535  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 01:27:51.459385  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 01:27:51.486161  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/calico-690290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0907 01:27:51.512261  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/ssl/certs/2962492.pem --> /usr/share/ca-certificates/2962492.pem (1708 bytes)
	I0907 01:27:51.536617  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 01:27:51.561254  530612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-294391/.minikube/certs/296249.pem --> /usr/share/ca-certificates/296249.pem (1338 bytes)
	I0907 01:27:51.587082  530612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 01:27:51.607587  530612 ssh_runner.go:195] Run: openssl version
	I0907 01:27:51.613085  530612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2962492.pem && ln -fs /usr/share/ca-certificates/2962492.pem /etc/ssl/certs/2962492.pem"
	I0907 01:27:51.622903  530612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2962492.pem
	I0907 01:27:51.626646  530612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  7 00:18 /usr/share/ca-certificates/2962492.pem
	I0907 01:27:51.626718  530612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2962492.pem
	I0907 01:27:51.633797  530612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2962492.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 01:27:51.643332  530612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 01:27:51.652713  530612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:27:51.656352  530612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  7 00:10 /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:27:51.656420  530612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 01:27:51.663548  530612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 01:27:51.673577  530612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296249.pem && ln -fs /usr/share/ca-certificates/296249.pem /etc/ssl/certs/296249.pem"
	I0907 01:27:51.683523  530612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296249.pem
	I0907 01:27:51.687179  530612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  7 00:18 /usr/share/ca-certificates/296249.pem
	I0907 01:27:51.687244  530612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296249.pem
	I0907 01:27:51.694442  530612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296249.pem /etc/ssl/certs/51391683.0"
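The openssl/ln sequence above is the standard OpenSSL hashed-directory layout: each CA certificate placed under /usr/share/ca-certificates also gets a symlink in /etc/ssl/certs named after its subject hash, which is how TLS clients on the node locate it. Reconstructed from the log for the minikube CA (the hash value is the one this run actually produced):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0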
	I0907 01:27:51.704570  530612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 01:27:51.707953  530612 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0907 01:27:51.708032  530612 kubeadm.go:392] StartCluster: {Name:calico-690290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-690290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 01:27:51.708111  530612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 01:27:51.708190  530612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 01:27:51.752781  530612 cri.go:89] found id: ""
	I0907 01:27:51.752882  530612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 01:27:51.762226  530612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0907 01:27:51.771426  530612 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0907 01:27:51.771523  530612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 01:27:51.780688  530612 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 01:27:51.780708  530612 kubeadm.go:157] found existing configuration files:
	
	I0907 01:27:51.780759  530612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0907 01:27:51.789903  530612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0907 01:27:51.789985  530612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0907 01:27:51.798678  530612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0907 01:27:51.809187  530612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0907 01:27:51.809276  530612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0907 01:27:51.818774  530612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0907 01:27:51.827968  530612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0907 01:27:51.828062  530612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0907 01:27:51.837005  530612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0907 01:27:51.846595  530612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0907 01:27:51.846674  530612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0907 01:27:51.856324  530612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0907 01:27:51.900892  530612 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0907 01:27:51.900954  530612 kubeadm.go:310] [preflight] Running pre-flight checks
	I0907 01:27:51.921356  530612 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0907 01:27:51.921435  530612 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0907 01:27:51.921489  530612 kubeadm.go:310] OS: Linux
	I0907 01:27:51.921541  530612 kubeadm.go:310] CGROUPS_CPU: enabled
	I0907 01:27:51.921599  530612 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0907 01:27:51.921651  530612 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0907 01:27:51.921705  530612 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0907 01:27:51.921757  530612 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0907 01:27:51.921810  530612 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0907 01:27:51.921861  530612 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0907 01:27:51.921913  530612 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0907 01:27:51.921965  530612 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0907 01:27:51.994898  530612 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0907 01:27:51.995015  530612 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0907 01:27:51.995111  530612 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0907 01:27:52.005897  530612 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0907 01:27:52.011449  530612 out.go:252]   - Generating certificates and keys ...
	I0907 01:27:52.011553  530612 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0907 01:27:52.011634  530612 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0907 01:27:52.647635  530612 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0907 01:27:52.822479  530612 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0907 01:27:53.526134  530612 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0907 01:27:54.064297  530612 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0907 01:27:54.431151  530612 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0907 01:27:54.431723  530612 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-690290 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0907 01:27:55.123280  530612 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0907 01:27:55.123659  530612 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-690290 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0907 01:27:55.342304  530612 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0907 01:27:55.805624  530612 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0907 01:27:56.285771  530612 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0907 01:27:56.286379  530612 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0907 01:27:56.435909  530612 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0907 01:27:57.308529  530612 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0907 01:27:57.532387  530612 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0907 01:27:58.992616  530612 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0907 01:27:59.217531  530612 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0907 01:27:59.218608  530612 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0907 01:27:59.221708  530612 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0907 01:27:59.225506  530612 out.go:252]   - Booting up control plane ...
	I0907 01:27:59.225618  530612 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0907 01:27:59.225708  530612 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0907 01:27:59.226888  530612 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0907 01:27:59.239698  530612 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0907 01:27:59.239830  530612 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0907 01:27:59.247698  530612 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0907 01:27:59.250760  530612 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0907 01:27:59.250823  530612 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0907 01:27:59.360671  530612 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0907 01:27:59.360797  530612 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0907 01:28:00.366040  530612 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005414204s
	I0907 01:28:00.372082  530612 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0907 01:28:00.372190  530612 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0907 01:28:00.372300  530612 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0907 01:28:00.372393  530612 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0907 01:28:03.640573  530612 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.267057454s
	I0907 01:28:05.253639  530612 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.882231913s
	I0907 01:28:07.379248  530612 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.007712878s
	I0907 01:28:07.420638  530612 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0907 01:28:07.483012  530612 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0907 01:28:07.504475  530612 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0907 01:28:07.504669  530612 kubeadm.go:310] [mark-control-plane] Marking the node calico-690290 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0907 01:28:07.534284  530612 kubeadm.go:310] [bootstrap-token] Using token: wa7e53.ngvwqp6ayikql61s
	I0907 01:28:07.537320  530612 out.go:252]   - Configuring RBAC rules ...
	I0907 01:28:07.537439  530612 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0907 01:28:07.551741  530612 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0907 01:28:07.570139  530612 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0907 01:28:07.583229  530612 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0907 01:28:07.598825  530612 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0907 01:28:07.606766  530612 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0907 01:28:07.787373  530612 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0907 01:28:08.228047  530612 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0907 01:28:08.787696  530612 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0907 01:28:08.787722  530612 kubeadm.go:310] 
	I0907 01:28:08.787782  530612 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0907 01:28:08.787792  530612 kubeadm.go:310] 
	I0907 01:28:08.787866  530612 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0907 01:28:08.787874  530612 kubeadm.go:310] 
	I0907 01:28:08.787899  530612 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0907 01:28:08.787959  530612 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0907 01:28:08.788015  530612 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0907 01:28:08.788023  530612 kubeadm.go:310] 
	I0907 01:28:08.788075  530612 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0907 01:28:08.788083  530612 kubeadm.go:310] 
	I0907 01:28:08.788129  530612 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0907 01:28:08.788137  530612 kubeadm.go:310] 
	I0907 01:28:08.788187  530612 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0907 01:28:08.788263  530612 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0907 01:28:08.788334  530612 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0907 01:28:08.788342  530612 kubeadm.go:310] 
	I0907 01:28:08.788423  530612 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0907 01:28:08.788501  530612 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0907 01:28:08.788509  530612 kubeadm.go:310] 
	I0907 01:28:08.788590  530612 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wa7e53.ngvwqp6ayikql61s \
	I0907 01:28:08.788692  530612 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f09d0d7a03ad8280e5c5379592d08528a80ed324cc8775b613706c99ea8527e8 \
	I0907 01:28:08.788716  530612 kubeadm.go:310] 	--control-plane 
	I0907 01:28:08.788725  530612 kubeadm.go:310] 
	I0907 01:28:08.788807  530612 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0907 01:28:08.788838  530612 kubeadm.go:310] 
	I0907 01:28:08.788917  530612 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wa7e53.ngvwqp6ayikql61s \
	I0907 01:28:08.789018  530612 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f09d0d7a03ad8280e5c5379592d08528a80ed324cc8775b613706c99ea8527e8 
	I0907 01:28:08.795095  530612 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0907 01:28:08.795326  530612 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0907 01:28:08.795441  530612 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0907 01:28:08.795466  530612 cni.go:84] Creating CNI manager for "calico"
	I0907 01:28:08.800571  530612 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0907 01:28:08.804103  530612 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0907 01:28:08.804139  530612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0907 01:28:08.826437  530612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0907 01:28:11.869919  530612 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.043449422s)
	I0907 01:28:11.869958  530612 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 01:28:11.870066  530612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:28:11.870152  530612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-690290 minikube.k8s.io/updated_at=2025_09_07T01_28_11_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d minikube.k8s.io/name=calico-690290 minikube.k8s.io/primary=true
	I0907 01:28:12.305763  530612 ops.go:34] apiserver oom_adj: -16
	I0907 01:28:12.305868  530612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:28:12.806874  530612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:28:13.306857  530612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0907 01:28:13.469219  530612 kubeadm.go:1105] duration metric: took 1.599193077s to wait for elevateKubeSystemPrivileges
	I0907 01:28:13.469246  530612 kubeadm.go:394] duration metric: took 21.761242404s to StartCluster
	I0907 01:28:13.469263  530612 settings.go:142] acquiring lock: {Name:mkd4385cdffa24b1b1c95580709bac830a122e89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:28:13.469326  530612 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 01:28:13.470328  530612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/kubeconfig: {Name:mkff4b98bbe95c3fd7ed7c7c76191ddc1012e81c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 01:28:13.470559  530612 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 01:28:13.470675  530612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0907 01:28:13.470923  530612 config.go:182] Loaded profile config "calico-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 01:28:13.470961  530612 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0907 01:28:13.471044  530612 addons.go:69] Setting storage-provisioner=true in profile "calico-690290"
	I0907 01:28:13.471057  530612 addons.go:238] Setting addon storage-provisioner=true in "calico-690290"
	I0907 01:28:13.471079  530612 host.go:66] Checking if "calico-690290" exists ...
	I0907 01:28:13.471643  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Status}}
	I0907 01:28:13.472145  530612 addons.go:69] Setting default-storageclass=true in profile "calico-690290"
	I0907 01:28:13.472168  530612 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-690290"
	I0907 01:28:13.472448  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Status}}
	I0907 01:28:13.475559  530612 out.go:179] * Verifying Kubernetes components...
	I0907 01:28:13.478502  530612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 01:28:13.524941  530612 addons.go:238] Setting addon default-storageclass=true in "calico-690290"
	I0907 01:28:13.524987  530612 host.go:66] Checking if "calico-690290" exists ...
	I0907 01:28:13.525433  530612 cli_runner.go:164] Run: docker container inspect calico-690290 --format={{.State.Status}}
	I0907 01:28:13.528930  530612 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 01:28:13.532898  530612 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 01:28:13.532924  530612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 01:28:13.532990  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:28:13.553912  530612 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 01:28:13.553934  530612 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 01:28:13.554000  530612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-690290
	I0907 01:28:13.573042  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:28:13.588350  530612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/calico-690290/id_rsa Username:docker}
	I0907 01:28:13.830828  530612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 01:28:13.900091  530612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 01:28:13.959634  530612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 01:28:13.959876  530612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0907 01:28:14.595789  530612 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
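The sed pipeline above edits the coredns ConfigMap in place so that the Corefile gains a log directive and a hosts block ahead of the forward plugin. Reconstructed from the sed expressions (the rest of the Corefile, elided with "..." here, is left untouched), the relevant fragment ends up roughly as:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}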
	I0907 01:28:14.597737  530612 node_ready.go:35] waiting up to 15m0s for node "calico-690290" to be "Ready" ...
	I0907 01:28:14.598859  530612 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0907 01:28:14.601857  530612 addons.go:514] duration metric: took 1.130882769s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0907 01:28:15.100515  530612 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-690290" context rescaled to 1 replicas
	W0907 01:28:16.601527  530612 node_ready.go:57] node "calico-690290" has "Ready":"False" status (will retry)
	[... 389 further identical "Ready":"False" node_ready.go:57 retries, logged every ~2.5s from 01:28:19 through 01:43:10, elided ...]
	W0907 01:43:12.600848  530612 node_ready.go:57] node "calico-690290" has "Ready":"False" status (will retry)
	I0907 01:43:14.598460  530612 node_ready.go:38] duration metric: took 15m0.000681397s for node "calico-690290" to be "Ready" ...
	I0907 01:43:14.601598  530612 out.go:203] 
	W0907 01:43:14.604490  530612 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0907 01:43:14.604509  530612 out.go:285] * 
	* 
	W0907 01:43:14.606664  530612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0907 01:43:14.609430  530612 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (935.10s)
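A plausible triage path for this failure class: when a node sits at "Ready":"False" for the entire 15m wait of a CNI start test, the usual cause is that the CNI pods never became ready, so the kubelet keeps reporting the pod network as not ready. A minimal sketch of the checks, assuming the profile name from the log above and the k8s-app=calico-node label used by the upstream Calico manifests:

	# Inspect the node condition and the CNI pods (sketch only; adjust selectors to the actual manifests)
	kubectl --context calico-690290 get nodes -o wide
	kubectl --context calico-690290 describe node calico-690290        # Ready condition reason/message
	kubectl --context calico-690290 -n kube-system get pods -o wide
	kubectl --context calico-690290 -n kube-system logs -l k8s-app=calico-node --tail=100
	minikube -p calico-690290 logs --file=logs.txt                     # as the error box above suggests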

                                                
                                    

Test pass (281/325)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 8.87
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 5.74
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.09
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 179.1
31 TestAddons/serial/GCPAuth/Namespaces 0.22
35 TestAddons/parallel/Registry 18.44
36 TestAddons/parallel/RegistryCreds 0.71
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.83
41 TestAddons/parallel/CSI 45.75
42 TestAddons/parallel/Headlamp 18.03
43 TestAddons/parallel/CloudSpanner 6.6
44 TestAddons/parallel/LocalPath 51.25
45 TestAddons/parallel/NvidiaDevicePlugin 6.59
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 12.2
49 TestCertOptions 37.59
50 TestCertExpiration 255.67
52 TestForceSystemdFlag 39.34
53 TestForceSystemdEnv 40.64
59 TestErrorSpam/setup 33.73
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.09
62 TestErrorSpam/pause 1.73
63 TestErrorSpam/unpause 1.93
64 TestErrorSpam/stop 1.49
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.11
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.69
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.86
76 TestFunctional/serial/CacheCmd/cache/add_local 1.43
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 37.61
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.73
87 TestFunctional/serial/LogsFileCmd 1.84
88 TestFunctional/serial/InvalidService 5.13
90 TestFunctional/parallel/ConfigCmd 0.57
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.02
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.54
103 TestFunctional/parallel/CpCmd 1.95
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 2.23
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
114 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 1.16
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
122 TestFunctional/parallel/ImageCommands/Setup 0.72
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.64
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.34
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
150 TestFunctional/parallel/ProfileCmd/profile_list 0.44
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
152 TestFunctional/parallel/MountCmd/any-port 8.6
153 TestFunctional/parallel/MountCmd/specific-port 1.62
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.94
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 199.92
163 TestMultiControlPlane/serial/DeployApp 9.03
164 TestMultiControlPlane/serial/PingHostFromPods 1.63
165 TestMultiControlPlane/serial/AddWorkerNode 58.68
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
168 TestMultiControlPlane/serial/CopyFile 19.04
169 TestMultiControlPlane/serial/StopSecondaryNode 12.69
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 32.51
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.26
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 121.9
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.11
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 25.11
177 TestMultiControlPlane/serial/RestartCluster 91.6
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
179 TestMultiControlPlane/serial/AddSecondaryNode 78.31
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
184 TestJSONOutput/start/Command 80.24
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.73
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.67
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.85
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 42.25
210 TestKicCustomNetwork/use_default_bridge_network 33.63
211 TestKicExistingNetwork 35.6
212 TestKicCustomSubnet 37.14
213 TestKicStaticIP 31.73
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 68.03
218 TestMountStart/serial/StartWithMountFirst 7.24
219 TestMountStart/serial/VerifyMountFirst 0.25
220 TestMountStart/serial/StartWithMountSecond 7.27
221 TestMountStart/serial/VerifyMountSecond 0.34
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.21
225 TestMountStart/serial/RestartStopped 7.91
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 137.58
230 TestMultiNode/serial/DeployApp2Nodes 8.12
231 TestMultiNode/serial/PingHostFrom2Pods 0.97
232 TestMultiNode/serial/AddNode 56.33
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.69
235 TestMultiNode/serial/CopyFile 10.01
236 TestMultiNode/serial/StopNode 2.25
237 TestMultiNode/serial/StartAfterStop 8.3
238 TestMultiNode/serial/RestartKeepsNodes 80.46
239 TestMultiNode/serial/DeleteNode 5.54
240 TestMultiNode/serial/StopMultiNode 23.89
241 TestMultiNode/serial/RestartMultiNode 52.97
242 TestMultiNode/serial/ValidateNameConflict 37.21
247 TestPreload 129.45
249 TestScheduledStopUnix 108.22
252 TestInsufficientStorage 10.46
253 TestRunningBinaryUpgrade 52.96
255 TestKubernetesUpgrade 356.8
256 TestMissingContainerUpgrade 123.36
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.36
259 TestNoKubernetes/serial/StartWithK8s 43.62
260 TestNoKubernetes/serial/StartWithStopK8s 17.04
261 TestNoKubernetes/serial/Start 8.8
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 0.67
264 TestNoKubernetes/serial/Stop 1.2
265 TestNoKubernetes/serial/StartNoArgs 6.71
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
267 TestStoppedBinaryUpgrade/Setup 0.75
268 TestStoppedBinaryUpgrade/Upgrade 57.39
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
278 TestPause/serial/Start 84.73
279 TestPause/serial/SecondStartNoReconfiguration 26.35
280 TestPause/serial/Pause 1.24
281 TestPause/serial/VerifyStatus 0.42
282 TestPause/serial/Unpause 0.83
283 TestPause/serial/PauseAgain 0.89
284 TestPause/serial/DeletePaused 2.75
285 TestPause/serial/VerifyDeletedResources 0.4
293 TestNetworkPlugins/group/false 5.52
298 TestStartStop/group/old-k8s-version/serial/FirstStart 57.84
299 TestStartStop/group/old-k8s-version/serial/DeployApp 11.45
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
301 TestStartStop/group/old-k8s-version/serial/Stop 11.96
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 52.6
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
307 TestStartStop/group/old-k8s-version/serial/Pause 3.22
309 TestStartStop/group/no-preload/serial/FirstStart 69.47
311 TestStartStop/group/embed-certs/serial/FirstStart 84.25
312 TestStartStop/group/no-preload/serial/DeployApp 11.43
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.47
314 TestStartStop/group/no-preload/serial/Stop 12.08
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
316 TestStartStop/group/no-preload/serial/SecondStart 49.56
317 TestStartStop/group/embed-certs/serial/DeployApp 11.35
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
320 TestStartStop/group/embed-certs/serial/Stop 12.05
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
323 TestStartStop/group/no-preload/serial/Pause 2.99
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
325 TestStartStop/group/embed-certs/serial/SecondStart 55.56
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.09
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
331 TestStartStop/group/embed-certs/serial/Pause 3.13
333 TestStartStop/group/newest-cni/serial/FirstStart 34.4
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.43
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.59
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
339 TestStartStop/group/newest-cni/serial/Stop 1.22
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
341 TestStartStop/group/newest-cni/serial/SecondStart 19.8
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.08
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
347 TestStartStop/group/newest-cni/serial/Pause 4.77
348 TestNetworkPlugins/group/auto/Start 84.69
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.11
353 TestNetworkPlugins/group/kindnet/Start 80.7
354 TestNetworkPlugins/group/auto/KubeletFlags 0.47
355 TestNetworkPlugins/group/auto/NetCatPod 13.41
356 TestNetworkPlugins/group/auto/DNS 0.21
357 TestNetworkPlugins/group/auto/Localhost 0.16
358 TestNetworkPlugins/group/auto/HairPin 0.17
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
362 TestNetworkPlugins/group/kindnet/NetCatPod 12.44
363 TestNetworkPlugins/group/kindnet/DNS 0.17
364 TestNetworkPlugins/group/kindnet/Localhost 0.17
365 TestNetworkPlugins/group/kindnet/HairPin 0.17
366 TestNetworkPlugins/group/custom-flannel/Start 60.83
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
369 TestNetworkPlugins/group/custom-flannel/DNS 0.18
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
372 TestNetworkPlugins/group/enable-default-cni/Start 78.45
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
378 TestNetworkPlugins/group/flannel/Start 60.71
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
381 TestNetworkPlugins/group/flannel/NetCatPod 10.34
382 TestNetworkPlugins/group/flannel/DNS 0.18
383 TestNetworkPlugins/group/flannel/Localhost 0.16
384 TestNetworkPlugins/group/flannel/HairPin 0.15
385 TestNetworkPlugins/group/bridge/Start 72.04
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
387 TestNetworkPlugins/group/bridge/NetCatPod 10.26
388 TestNetworkPlugins/group/bridge/DNS 0.21
389 TestNetworkPlugins/group/bridge/Localhost 0.16
390 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.28.0/json-events (8.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-023261 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-023261 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.868661778s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.87s)
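
This subtest drives "minikube start --download-only", which fetches the kic base image, the preloaded image tarball and the kubectl binary without ever creating a cluster. A minimal reproduction of the invocation above, assuming the arm64 binary has been built to out/ as in this job (the duplicated --container-runtime flag from the harness is dropped here):

    # emit JSON progress events while downloading everything needed for v1.28.0 on CRI-O
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-023261 \
      --force --alsologtostderr \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker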

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0907 00:09:51.482089  296249 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0907 00:09:51.482166  296249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
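
The preload tarball lands in the cache directory shown in the log above. On this runner it can be confirmed by hand with:

    ls -lh /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/
    # expected: preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4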

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-023261
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-023261: exit status 85 (88.810236ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-023261 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-023261 │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:09:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:09:42.661554  296254 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:09:42.661673  296254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:42.661684  296254 out.go:374] Setting ErrFile to fd 2...
	I0907 00:09:42.661689  296254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:42.661937  296254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	W0907 00:09:42.662081  296254 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21132-294391/.minikube/config/config.json: open /home/jenkins/minikube-integration/21132-294391/.minikube/config/config.json: no such file or directory
	I0907 00:09:42.662532  296254 out.go:368] Setting JSON to true
	I0907 00:09:42.663362  296254 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6732,"bootTime":1757197051,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:09:42.663433  296254 start.go:140] virtualization:  
	I0907 00:09:42.667781  296254 out.go:99] [download-only-023261] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0907 00:09:42.667971  296254 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball: no such file or directory
	I0907 00:09:42.668094  296254 notify.go:220] Checking for updates...
	I0907 00:09:42.671672  296254 out.go:171] MINIKUBE_LOCATION=21132
	I0907 00:09:42.674766  296254 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:09:42.677696  296254 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:09:42.680730  296254 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:09:42.683975  296254 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0907 00:09:42.689866  296254 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0907 00:09:42.690146  296254 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:09:42.722170  296254 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:09:42.722283  296254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:09:42.773419  296254 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-07 00:09:42.764576004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:09:42.773523  296254 docker.go:318] overlay module found
	I0907 00:09:42.776516  296254 out.go:99] Using the docker driver based on user configuration
	I0907 00:09:42.776562  296254 start.go:304] selected driver: docker
	I0907 00:09:42.776573  296254 start.go:918] validating driver "docker" against <nil>
	I0907 00:09:42.776686  296254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:09:42.830612  296254 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-07 00:09:42.821210568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:09:42.830781  296254 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0907 00:09:42.831045  296254 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0907 00:09:42.831202  296254 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0907 00:09:42.834315  296254 out.go:171] Using Docker driver with root privileges
	I0907 00:09:42.837310  296254 cni.go:84] Creating CNI manager for ""
	I0907 00:09:42.837384  296254 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:09:42.837397  296254 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0907 00:09:42.837478  296254 start.go:348] cluster config:
	{Name:download-only-023261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-023261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:09:42.840488  296254 out.go:99] Starting "download-only-023261" primary control-plane node in "download-only-023261" cluster
	I0907 00:09:42.840513  296254 cache.go:123] Beginning downloading kic base image for docker with crio
	I0907 00:09:42.843401  296254 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0907 00:09:42.843444  296254 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0907 00:09:42.843617  296254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0907 00:09:42.858429  296254 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0907 00:09:42.859276  296254 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0907 00:09:42.859379  296254 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0907 00:09:42.906213  296254 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0907 00:09:42.906250  296254 cache.go:58] Caching tarball of preloaded images
	I0907 00:09:42.907078  296254 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0907 00:09:42.910353  296254 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0907 00:09:42.910381  296254 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0907 00:09:43.002413  296254 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0907 00:09:46.509570  296254 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0907 00:09:46.509682  296254 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0907 00:09:47.569762  296254 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0907 00:09:47.570159  296254 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/download-only-023261/config.json ...
	I0907 00:09:47.570197  296254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/download-only-023261/config.json: {Name:mk165a545e13310ad691f6b5aaa1ca990c6fc2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:09:47.571114  296254 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0907 00:09:47.571996  296254 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21132-294391/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-023261 host does not exist
	  To start a cluster, run: "minikube start -p download-only-023261"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
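
The non-zero exit here is expected: "minikube logs" returns status 85 for this profile because its control-plane host was never created, which is exactly the state a --download-only profile is left in (the captured output says as much). A quick check of that contract:

    out/minikube-linux-arm64 logs -p download-only-023261
    echo "exit status: $?"    # the test treats 85 as a pass for download-only profiles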

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-023261
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (5.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-150717 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-150717 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.743859391s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0907 00:09:57.701340  296249 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0907 00:09:57.701381  296249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-150717
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-150717: exit status 85 (94.223624ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-023261 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-023261 │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │ 07 Sep 25 00:09 UTC │
	│ delete  │ -p download-only-023261                                                                                                                                                   │ download-only-023261 │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │ 07 Sep 25 00:09 UTC │
	│ start   │ -o=json --download-only -p download-only-150717 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-150717 │ jenkins │ v1.36.0 │ 07 Sep 25 00:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:09:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:09:52.001785  296457 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:09:52.001999  296457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:52.002010  296457 out.go:374] Setting ErrFile to fd 2...
	I0907 00:09:52.002015  296457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:09:52.002283  296457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:09:52.002699  296457 out.go:368] Setting JSON to true
	I0907 00:09:52.003523  296457 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6741,"bootTime":1757197051,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:09:52.003603  296457 start.go:140] virtualization:  
	I0907 00:09:52.014506  296457 out.go:99] [download-only-150717] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 00:09:52.015002  296457 notify.go:220] Checking for updates...
	I0907 00:09:52.017923  296457 out.go:171] MINIKUBE_LOCATION=21132
	I0907 00:09:52.021227  296457 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:09:52.024187  296457 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:09:52.027373  296457 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:09:52.030349  296457 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0907 00:09:52.036306  296457 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0907 00:09:52.036693  296457 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:09:52.071467  296457 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:09:52.071579  296457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:09:52.130622  296457 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-07 00:09:52.1213555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:09:52.130725  296457 docker.go:318] overlay module found
	I0907 00:09:52.133855  296457 out.go:99] Using the docker driver based on user configuration
	I0907 00:09:52.133896  296457 start.go:304] selected driver: docker
	I0907 00:09:52.133909  296457 start.go:918] validating driver "docker" against <nil>
	I0907 00:09:52.134025  296457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:09:52.190982  296457 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-07 00:09:52.181127765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:09:52.191171  296457 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0907 00:09:52.191481  296457 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0907 00:09:52.191644  296457 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0907 00:09:52.194959  296457 out.go:171] Using Docker driver with root privileges
	I0907 00:09:52.197785  296457 cni.go:84] Creating CNI manager for ""
	I0907 00:09:52.197863  296457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0907 00:09:52.197880  296457 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0907 00:09:52.197967  296457 start.go:348] cluster config:
	{Name:download-only-150717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-150717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:09:52.200929  296457 out.go:99] Starting "download-only-150717" primary control-plane node in "download-only-150717" cluster
	I0907 00:09:52.200958  296457 cache.go:123] Beginning downloading kic base image for docker with crio
	I0907 00:09:52.203857  296457 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0907 00:09:52.203885  296457 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:09:52.204059  296457 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0907 00:09:52.220307  296457 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0907 00:09:52.220460  296457 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0907 00:09:52.220489  296457 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0907 00:09:52.220498  296457 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0907 00:09:52.220506  296457 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0907 00:09:52.260993  296457 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0907 00:09:52.261028  296457 cache.go:58] Caching tarball of preloaded images
	I0907 00:09:52.261833  296457 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:09:52.264872  296457 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0907 00:09:52.264900  296457 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0907 00:09:52.341413  296457 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0907 00:09:55.820560  296457 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0907 00:09:55.820708  296457 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21132-294391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0907 00:09:56.762699  296457 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0907 00:09:56.763060  296457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/download-only-150717/config.json ...
	I0907 00:09:56.763094  296457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/download-only-150717/config.json: {Name:mk146a6e1ecd32f5b4694b1a61b9a258659dc64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:09:56.763293  296457 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:09:56.763448  296457 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21132-294391/.minikube/cache/linux/arm64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-150717 host does not exist
	  To start a cluster, run: "minikube start -p download-only-150717"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-150717
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0907 00:09:59.053347  296249 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-905152 --alsologtostderr --binary-mirror http://127.0.0.1:44549 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-905152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-905152
--- PASS: TestBinaryMirror (0.62s)
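
The --binary-mirror flag redirects the Kubernetes binary download (kubectl here) to a caller-supplied HTTP endpoint instead of dl.k8s.io; the test points a download-only start at a mirror served locally on 127.0.0.1:44549. The relevant invocation, as captured above:

    out/minikube-linux-arm64 start --download-only -p binary-mirror-905152 \
      --alsologtostderr --binary-mirror http://127.0.0.1:44549 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p binary-mirror-905152    # profile cleanup, as the helper does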

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-055380
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-055380: exit status 85 (71.622033ms)

                                                
                                                
-- stdout --
	* Profile "addons-055380" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-055380"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-055380
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-055380: exit status 85 (78.148205ms)

                                                
                                                
-- stdout --
	* Profile "addons-055380" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-055380"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (179.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-055380 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-055380 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m59.100991751s)
--- PASS: TestAddons/Setup (179.10s)
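
Setup enables the full addon matrix in a single start. Individual addons can also be inspected and toggled on the resulting profile afterwards, which is the pattern the parallel subtests below rely on; for example:

    out/minikube-linux-arm64 -p addons-055380 addons list              # show which addons the start enabled
    out/minikube-linux-arm64 addons enable headlamp -p addons-055380   # enable a further addon on the running profile
    out/minikube-linux-arm64 -p addons-055380 addons disable registry --alsologtostderr -v=1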

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-055380 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-055380 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)
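
This subtest verifies that the gcp-auth addon makes its gcp-auth secret available in namespaces created after the addon is enabled. The same check by hand:

    kubectl --context addons-055380 create ns new-namespace
    kubectl --context addons-055380 get secret gcp-auth -n new-namespace   # should exist once gcp-auth has propagated it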

                                                
                                    
x
+
TestAddons/parallel/Registry (18.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 12.106688ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-z9b2q" [346ca7f2-e990-422f-8fd9-273339c464ed] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003392867s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-8hhc9" [a68b67f2-7643-4e0c-b24a-2330ae7e633b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003404644s
addons_test.go:392: (dbg) Run:  kubectl --context addons-055380 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-055380 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-055380 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.270114396s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 ip
2025/09/07 00:13:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.44s)
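
The registry check has two halves: an in-cluster probe of the registry service DNS name from a throwaway busybox pod, and a host-side GET against the node IP (192.168.49.2 in this run), where the registry proxy answers on port 5000. Reproduced by hand:

    kubectl --context addons-055380 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-055380 ip):5000"   # mirrors the DEBUG GET above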

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.140232ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-055380
addons_test.go:332: (dbg) Run:  kubectl --context addons-055380 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)
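
registry-creds is configured non-interactively from a JSON config file passed with -f, and the resulting credentials materialise as secrets in kube-system. The two commands the test runs:

    out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-055380
    kubectl --context addons-055380 -n kube-system get secret -o yaml   # the configured registry credentials appear here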

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lks2d" [e3e5cca5-4ccb-4430-94dd-210de889ef1f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003709459s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.633561ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ph552" [effa6dcf-0552-42c4-946c-35e72de94049] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003731915s
addons_test.go:463: (dbg) Run:  kubectl --context addons-055380 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)
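
Once the metrics-server pod is healthy the kubectl top verbs start returning data, which is what the final step checks. For example:

    kubectl --context addons-055380 top pods -n kube-system
    kubectl --context addons-055380 top nodes    # also served by metrics-server once it has scraped the node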

                                                
                                    
x
+
TestAddons/parallel/CSI (45.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0907 00:14:04.195314  296249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0907 00:14:04.202310  296249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0907 00:14:04.202333  296249 kapi.go:107] duration metric: took 7.03505ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.044486ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-055380 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-055380 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d13ebb06-bb87-416b-aed3-50cc3c30f255] Pending
helpers_test.go:352: "task-pv-pod" [d13ebb06-bb87-416b-aed3-50cc3c30f255] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d13ebb06-bb87-416b-aed3-50cc3c30f255] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003875432s
addons_test.go:572: (dbg) Run:  kubectl --context addons-055380 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-055380 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-055380 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-055380 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-055380 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-055380 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-055380 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [aba6a79c-c9f0-4232-a94e-9f82e8b5e260] Pending
helpers_test.go:352: "task-pv-pod-restore" [aba6a79c-c9f0-4232-a94e-9f82e8b5e260] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [aba6a79c-c9f0-4232-a94e-9f82e8b5e260] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003923311s
addons_test.go:614: (dbg) Run:  kubectl --context addons-055380 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-055380 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-055380 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.847453706s)
--- PASS: TestAddons/parallel/CSI (45.75s)
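
The CSI block above exercises the full snapshot/restore path: provision a PVC, run a pod against it, snapshot it, delete the original, and restore a new PVC from the snapshot. A minimal sketch of that flow, assuming the csi-hostpath-driver and volumesnapshots addons are enabled; the manifests are illustrative (the csi-hostpath-sc and csi-hostpath-snapclass class names are assumptions), not minikube's testdata:

# Snapshot an existing, bound PVC named hpvc.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class from the addon
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl wait --for=jsonpath='{.status.readyToUse}'=true \
  volumesnapshot/new-snapshot-demo --timeout=2m

# Restore the snapshot into a fresh PVC; a pod mounting hpvc-restore sees the original data.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class from the addon
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF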

                                                
                                    
TestAddons/parallel/Headlamp (18.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-055380 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-055380 --alsologtostderr -v=1: (1.039597681s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-4fsjz" [33edca7e-cf0b-408d-ac60-511f380756b8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-4fsjz" [33edca7e-cf0b-408d-ac60-511f380756b8] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004169879s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 addons disable headlamp --alsologtostderr -v=1: (5.983055767s)
--- PASS: TestAddons/parallel/Headlamp (18.03s)
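
The same enable, wait-for-label, disable pattern used by the addon tests above can be reproduced by hand; a small sketch (namespace and label taken from the log, the kubectl wait is an addition here, not the test's own check):

minikube addons enable headlamp -p addons-055380
kubectl --context addons-055380 -n headlamp wait pod \
  -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m
minikube -p addons-055380 addons disable headlamp --alsologtostderr -v=1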

                                                
                                    
TestAddons/parallel/CloudSpanner (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-qlt44" [0dfaf8ef-1941-4167-86ec-e8213e4e06e8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003793989s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

                                                
                                    
TestAddons/parallel/LocalPath (51.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-055380 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-055380 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-055380 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [54b7a80a-2b9e-499c-96cc-09b8ef6cfe8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [54b7a80a-2b9e-499c-96cc-09b8ef6cfe8d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [54b7a80a-2b9e-499c-96cc-09b8ef6cfe8d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003674444s
addons_test.go:967: (dbg) Run:  kubectl --context addons-055380 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 ssh "cat /opt/local-path-provisioner/pvc-fd80b47b-65ab-4f10-9a0a-f519bd7a8560_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-055380 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-055380 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.975875337s)
--- PASS: TestAddons/parallel/LocalPath (51.25s)
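
What LocalPath verifies: a PVC on the Rancher local-path provisioner stays Pending until a pod consumes it, and whatever the pod writes then shows up under /opt/local-path-provisioner on the node. A sketch under those assumptions; the manifest is illustrative (including the local-path class name), not minikube's testdata:

kubectl --context addons-055380 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path        # assumed class installed by storage-provisioner-rancher
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# Once the pod completes, read the file back from the node's host path:
minikube -p addons-055380 ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"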

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9gjct" [a67de55a-d27c-42fd-9d05-9a24e7e106de] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003053509s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
TestAddons/parallel/Yakd (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vdjfm" [d90c4e63-131d-4c19-bbab-b916a7ad2f2b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003253159s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-055380 addons disable yakd --alsologtostderr -v=1: (5.854022673s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-055380
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-055380: (11.913155205s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-055380
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-055380
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-055380
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

                                                
                                    
TestCertOptions (37.59s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-921503 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-921503 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.909143753s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-921503 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-921503 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-921503 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-921503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-921503
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-921503: (2.001288021s)
--- PASS: TestCertOptions (37.59s)
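
What TestCertOptions asserts can be checked by hand: the extra --apiserver-ips and --apiserver-names values end up in the apiserver certificate's SAN list, and the custom --apiserver-port in the node's kubeconfig. A sketch (both ssh commands mirror the log; the greps are added here):

minikube -p cert-options-921503 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'        # expect 192.168.15.15 and www.google.com listed
minikube ssh -p cert-options-921503 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555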

                                                
                                    
TestCertExpiration (255.67s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-375278 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-375278 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.156780413s)
E0907 01:17:59.742951  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-375278 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0907 01:21:06.467871  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-375278 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.737097914s)
helpers_test.go:175: Cleaning up "cert-expiration-375278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-375278
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-375278: (2.773481162s)
--- PASS: TestCertExpiration (255.67s)
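
The round-trip above starts a cluster with deliberately short-lived certificates, then restarts it with a long --cert-expiration, which is expected to regenerate the cluster certificates on the second start (flags copied from the log):

minikube start -p cert-expiration-375278 --memory=3072 --cert-expiration=3m \
  --driver=docker --container-runtime=crio
# ...let the 3m certificates approach expiry, then restart with a long expiry:
minikube start -p cert-expiration-375278 --memory=3072 --cert-expiration=8760h \
  --driver=docker --container-runtime=crio
minikube delete -p cert-expiration-375278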

                                                
                                    
TestForceSystemdFlag (39.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-140091 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0907 01:16:06.467797  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-140091 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.120292454s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-140091 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-140091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-140091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-140091: (2.778893306s)
--- PASS: TestForceSystemdFlag (39.34s)
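
The force-systemd check comes down to one line of CRI-O configuration inside the node. A hedged sketch (the file path is from the log; the cgroup_manager key and value are an assumption about what the test expects to find):

minikube start -p force-systemd-flag-140091 --memory=3072 --force-systemd \
  --alsologtostderr -v=5 --driver=docker --container-runtime=crio
minikube -p force-systemd-flag-140091 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
  | grep cgroup_manager                        # expected: cgroup_manager = "systemd"
minikube delete -p force-systemd-flag-140091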

                                                
                                    
TestForceSystemdEnv (40.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-788041 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-788041 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.030781245s)
helpers_test.go:175: Cleaning up "force-systemd-env-788041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-788041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-788041: (2.613053423s)
--- PASS: TestForceSystemdEnv (40.64s)

                                                
                                    
TestErrorSpam/setup (33.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-225908 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-225908 --driver=docker  --container-runtime=crio
E0907 00:17:59.751105  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:17:59.757543  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:17:59.768887  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:17:59.795283  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:17:59.836643  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:17:59.918003  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:18:00.079559  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:18:00.401309  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:18:01.043328  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:18:02.324733  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:18:04.886743  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-225908 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-225908 --driver=docker  --container-runtime=crio: (33.734239333s)
--- PASS: TestErrorSpam/setup (33.73s)

                                                
                                    
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
TestErrorSpam/status (1.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 status
--- PASS: TestErrorSpam/status (1.09s)

                                                
                                    
TestErrorSpam/pause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 pause
E0907 00:18:10.008657  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
TestErrorSpam/unpause (1.93s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

                                                
                                    
TestErrorSpam/stop (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 stop: (1.279819326s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-225908 --log_dir /tmp/nospam-225908 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21132-294391/.minikube/files/etc/test/nested/copy/296249/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.11s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258398 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0907 00:18:20.250671  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:18:40.732992  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:19:21.695264  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-258398 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.107926s)
--- PASS: TestFunctional/serial/StartWithProxy (79.11s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.69s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0907 00:19:37.195499  296249 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258398 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-258398 --alsologtostderr -v=8: (29.693404708s)
functional_test.go:678: soft start took 29.693906876s for "functional-258398" cluster.
I0907 00:20:06.889213  296249 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (29.69s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-258398 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 cache add registry.k8s.io/pause:3.1: (1.292720943s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 cache add registry.k8s.io/pause:3.3: (1.298820178s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 cache add registry.k8s.io/pause:latest: (1.263854706s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-258398 /tmp/TestFunctionalserialCacheCmdcacheadd_local1687773565/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cache add minikube-local-cache-test:functional-258398
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cache delete minikube-local-cache-test:functional-258398
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-258398
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.528233ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 cache reload: (1.085535535s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
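
cache reload pushes everything previously added with cache add back into the node's container runtime, which is the round-trip exercised above; it can be reproduced with the same image and profile:

minikube -p functional-258398 cache add registry.k8s.io/pause:latest
minikube -p functional-258398 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-258398 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true   # image is gone
minikube -p functional-258398 cache reload
minikube -p functional-258398 ssh sudo crictl inspecti registry.k8s.io/pause:latest           # image is back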

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 kubectl -- --context functional-258398 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-258398 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258398 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0907 00:20:43.616705  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-258398 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.611295198s)
functional_test.go:776: restart took 37.611443212s for "functional-258398" cluster.
I0907 00:20:52.833053  296249 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (37.61s)
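
--extra-config forwards per-component flags (here an apiserver admission plugin) into the generated cluster configuration. One way to confirm the flag actually reached the running apiserver; the grep is an addition here, not part of the test:

minikube start -p functional-258398 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-258398 -n kube-system get pod \
  -l component=kube-apiserver -o yaml | grep enable-admission-plugins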

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-258398 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 logs: (1.728486173s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 logs --file /tmp/TestFunctionalserialLogsFileCmd2192614032/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 logs --file /tmp/TestFunctionalserialLogsFileCmd2192614032/001/logs.txt: (1.841215863s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                    
TestFunctional/serial/InvalidService (5.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-258398 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-258398
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-258398: exit status 115 (1.017303811s)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30747 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-258398 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.13s)
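
The InvalidService case is simply a Service with no running pod behind it, so minikube service exits with SVC_UNREACHABLE (exit status 115 above). A sketch of such a service; the manifest is illustrative, not testdata/invalidsvc.yaml:

kubectl --context functional-258398 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod          # matches nothing, so the service has no endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
minikube -p functional-258398 service invalid-svc     # expected to fail: no running pod for the service
kubectl --context functional-258398 delete svc invalid-svc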

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 config get cpus: exit status 14 (103.297347ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 config get cpus: exit status 14 (98.103863ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
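
config get on a key that has never been set exits with code 14, which is what ConfigCmd keys off; the set/get/unset round-trip looks like this:

minikube -p functional-258398 config get cpus || echo "exit $?"    # 14: key not found
minikube -p functional-258398 config set cpus 2
minikube -p functional-258398 config get cpus                      # prints 2
minikube -p functional-258398 config unset cpus
minikube -p functional-258398 config get cpus || echo "exit $?"    # 14 again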

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (198.444809ms)

                                                
                                                
-- stdout --
	* [functional-258398] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:31:30.555282  328042 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:31:30.555506  328042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.555532  328042 out.go:374] Setting ErrFile to fd 2...
	I0907 00:31:30.555550  328042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.555836  328042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:31:30.556265  328042 out.go:368] Setting JSON to false
	I0907 00:31:30.557349  328042 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8040,"bootTime":1757197051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:31:30.557456  328042 start.go:140] virtualization:  
	I0907 00:31:30.561155  328042 out.go:179] * [functional-258398] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 00:31:30.564228  328042 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:31:30.564289  328042 notify.go:220] Checking for updates...
	I0907 00:31:30.570059  328042 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:31:30.573010  328042 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:31:30.575961  328042 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:31:30.578817  328042 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 00:31:30.581705  328042 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:31:30.585195  328042 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:31:30.585834  328042 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:31:30.613728  328042 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:31:30.613860  328042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:31:30.675807  328042 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 00:31:30.666459148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:31:30.675915  328042 docker.go:318] overlay module found
	I0907 00:31:30.680745  328042 out.go:179] * Using the docker driver based on existing profile
	I0907 00:31:30.683519  328042 start.go:304] selected driver: docker
	I0907 00:31:30.683541  328042 start.go:918] validating driver "docker" against &{Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:31:30.683648  328042 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:31:30.687068  328042 out.go:203] 
	W0907 00:31:30.689990  328042 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0907 00:31:30.692802  328042 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258398 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
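
--dry-run still runs the normal argument validation, so an impossible memory request fails immediately with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 above) without touching the existing profile; per the message in the log, requests below the 1800MB usable minimum are rejected:

minikube start -p functional-258398 --dry-run --memory 250MB \
  --alsologtostderr --driver=docker --container-runtime=crio
echo $?    # 23 in the run above; a request of at least 1800MB passes this check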

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-258398 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (216.141704ms)

                                                
                                                
-- stdout --
	* [functional-258398] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:31:30.995730  328161 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:31:30.995898  328161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.995911  328161 out.go:374] Setting ErrFile to fd 2...
	I0907 00:31:30.995918  328161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:31:30.996294  328161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:31:30.996655  328161 out.go:368] Setting JSON to false
	I0907 00:31:30.997600  328161 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8040,"bootTime":1757197051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 00:31:30.997669  328161 start.go:140] virtualization:  
	I0907 00:31:31.000901  328161 out.go:179] * [functional-258398] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0907 00:31:31.017952  328161 notify.go:220] Checking for updates...
	I0907 00:31:31.018065  328161 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:31:31.021171  328161 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:31:31.024111  328161 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 00:31:31.027021  328161 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 00:31:31.029893  328161 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 00:31:31.032938  328161 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:31:31.036462  328161 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:31:31.037142  328161 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:31:31.071178  328161 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 00:31:31.071301  328161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:31:31.138155  328161 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 00:31:31.128009597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:31:31.138267  328161 docker.go:318] overlay module found
	I0907 00:31:31.141470  328161 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0907 00:31:31.144302  328161 start.go:304] selected driver: docker
	I0907 00:31:31.144322  328161 start.go:918] validating driver "docker" against &{Name:functional-258398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-258398 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:31:31.144432  328161 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:31:31.147897  328161 out.go:203] 
	W0907 00:31:31.150946  328161 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0907 00:31:31.153781  328161 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
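
The StatusCmd run exercises both a Go-template format (-f) and JSON output (-o json) for `minikube status`. The following is a minimal Go sketch, not part of the test suite, showing how that JSON output could be consumed programmatically; the struct field names are an assumption based on the keys used in the -f template above (Host, Kubelet, APIServer, Kubeconfig).

// status_check.go - minimal sketch (not from the test suite) that runs the same
// `minikube status -o json` call as above and decodes the fields named in the -f template.
// The JSON tags below are assumed to match those template keys.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type minikubeStatus struct {
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
}

func main() {
	// -p selects the profile; functional-258398 is the profile used throughout this report.
	// minikube status exits non-zero when components are down, so err alone is not treated as fatal.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-258398",
		"status", "-o", "json").Output()
	if len(out) == 0 && err != nil {
		log.Fatalf("minikube status produced no output: %v", err)
	}
	var st minikubeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decoding status JSON: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}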

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh -n functional-258398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cp functional-258398:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2852010086/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh -n functional-258398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh -n functional-258398 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)
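
The CpCmd test copies a local file into the node with `minikube cp` and verifies it by reading the file back over `minikube ssh`. Below is a hedged sketch of the same copy-and-verify round trip, using the profile and paths from the log above; the comparison logic is illustrative, not the test's own implementation.

// cp_roundtrip.go - illustrative sketch of the copy/verify pattern used by
// TestFunctional/parallel/CpCmd: `minikube cp` a file into the node, then read it
// back over `minikube ssh` and compare the contents.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	if err != nil {
		log.Fatalf("%v: %v", args, err)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	run("-p", "functional-258398", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	got := run("-p", "functional-258398", "ssh", "-n", "functional-258398",
		"sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match source")
	}
	log.Print("cp round-trip OK")
}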

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/296249/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /etc/test/nested/copy/296249/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/296249.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /etc/ssl/certs/296249.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/296249.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /usr/share/ca-certificates/296249.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2962492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /etc/ssl/certs/2962492.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2962492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /usr/share/ca-certificates/2962492.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)
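
CertSync checks that the host certificate 296249.pem (the number matches the test process id visible in this report's log prefixes) is present in the guest at /etc/ssl/certs and /usr/share/ca-certificates, together with the hash-named copy 51391683.0. Below is an illustrative sketch, under the assumption that the three locations should hold identical bytes, of how that could be verified over `minikube ssh`.

// certsync_check.go - illustrative sketch: confirm the synced certificate is byte-identical
// at the three in-guest paths read by TestFunctional/parallel/CertSync.
// Assumption: all three paths are expected to contain the same certificate bytes.
package main

import (
	"bytes"
	"log"
	"os/exec"
)

func sshCat(path string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-258398",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		log.Fatalf("reading %s: %v", path, err)
	}
	return out
}

func main() {
	paths := []string{
		"/etc/ssl/certs/296249.pem",
		"/usr/share/ca-certificates/296249.pem",
		"/etc/ssl/certs/51391683.0",
	}
	ref := sshCat(paths[0])
	for _, p := range paths[1:] {
		if !bytes.Equal(ref, sshCat(p)) {
			log.Fatalf("%s differs from %s", p, paths[0])
		}
	}
	log.Print("all synced certificate copies match")
}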

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-258398 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
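
NodeLabels dumps the label keys of the first node via a kubectl go-template. For comparison, here is a sketch of the same check with client-go; the kubeconfig path is taken from this run's environment, and the code assumes the kubeconfig's current context points at the functional-258398 cluster.

// node_labels.go - sketch of the label listing above using client-go instead of a
// kubectl go-template. Kubeconfig path and single-node assumption come from this report.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21132-294391/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if len(nodes.Items) == 0 {
		log.Fatal("no nodes found")
	}
	// Print the label keys of the first (and, in this profile, only) node.
	for k := range nodes.Items[0].Labels {
		fmt.Println(k)
	}
}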

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh "sudo systemctl is-active docker": exit status 1 (336.029387ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh "sudo systemctl is-active containerd": exit status 1 (357.698713ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
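
NonActiveRuntimeDisabled confirms that docker and containerd are not running in a cluster that uses cri-o: `systemctl is-active` prints "inactive" and exits with status 3 (the conventional "not running" code), which `minikube ssh` propagates as a non-zero exit, so the exit status 1 seen above is the expected result. A small sketch of the same check follows.

// runtime_check.go - sketch of the check above: on a cri-o cluster, the docker and containerd
// units should not be active. `systemctl is-active` prints the state and exits non-zero for an
// inactive unit, which minikube ssh surfaces as a non-zero exit of its own.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func unitState(unit string) string {
	// Output() still returns the captured stdout when the command exits non-zero,
	// so the error is deliberately ignored and only the printed state is used.
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-258398",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		if state := unitState(unit); state == "active" {
			fmt.Printf("unexpected: %s is active on a cri-o cluster\n", unit)
		} else {
			fmt.Printf("%s: %s (expected on a cri-o cluster)\n", unit, state)
		}
	}
}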

                                                
                                    
x
+
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 version -o=json --components: (1.159215696s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258398 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-258398
localhost/kicbase/echo-server:functional-258398
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258398 image ls --format short --alsologtostderr:
I0907 00:36:35.734396  329085 out.go:360] Setting OutFile to fd 1 ...
I0907 00:36:35.734605  329085 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:35.734637  329085 out.go:374] Setting ErrFile to fd 2...
I0907 00:36:35.734656  329085 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:35.734936  329085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
I0907 00:36:35.735703  329085 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:35.735919  329085 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:35.736468  329085 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:36:35.757806  329085 ssh_runner.go:195] Run: systemctl --version
I0907 00:36:35.757914  329085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:36:35.776072  329085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:36:35.865252  329085 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258398 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-258398  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-258398  │ 2e4e1de285af9 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ localhost/my-image                      │ functional-258398  │ cc22fe6bee7b6 │ 1.64MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258398 image ls --format table --alsologtostderr:
I0907 00:36:40.475481  329432 out.go:360] Setting OutFile to fd 1 ...
I0907 00:36:40.475720  329432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:40.475747  329432 out.go:374] Setting ErrFile to fd 2...
I0907 00:36:40.475766  329432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:40.476059  329432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
I0907 00:36:40.476754  329432 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:40.476969  329432 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:40.477461  329432 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:36:40.495118  329432 ssh_runner.go:195] Run: systemctl --version
I0907 00:36:40.495168  329432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:36:40.514178  329432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:36:40.609493  329432 ssh_runner.go:195] Run: sudo crictl images --output json
E0907 00:37:59.743428  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258398 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"a392740e12f72cc420ce4d23027479fb215e2767dc4cdf74d86454891c66bbf9","repoDigests":["docker.io/library/a5093614cccff1d3953802b0c39ccd8ac8c44f9bc83458ad08e72719db676339-tmp@sha256:ec6563a40cec13b6bfe788177a215ac3bfee717428a619f6a9b48a115870147e"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc
420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"cc22fe6bee7b61e293a6a7f751f9f5b1c359d296cdf8af57c4bce59335e5c6d3","repoDigests":["localhost/my-image@sha256:22411dd4b1eaf6b0e3d15eb39cdf8e5bd1c297753931407edfe2caf0eb94e53d"],"repoTags":["localhost/my-image:functional-258398"],"size":"1640225"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"6fc32d66c141152245438e6512df7
88cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"},{"id":"2e4e1de285af9647a376b662c27c6e0500a5fce25435520ea4752f2108887111","repoDigests":["localhost/minikube-local-cache-test@sha256:e4d9e3bd4adb5f71f4267931d87890d183204c6a28c401c696ef7d6cd33db5a7"],"repoTags":["localhost/minikube-local-cache-test:functional-258398"],"size":"3330"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221a
c2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDig
ests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-258398"],"size":"4788229"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8d
e77b"],"size":"111333938"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258398 image ls --format json --alsologtostderr:
I0907 00:36:40.246735  329402 out.go:360] Setting OutFile to fd 1 ...
I0907 00:36:40.246854  329402 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:40.246865  329402 out.go:374] Setting ErrFile to fd 2...
I0907 00:36:40.246871  329402 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:40.247106  329402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
I0907 00:36:40.247741  329402 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:40.247887  329402 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:40.248329  329402 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:36:40.265586  329402 ssh_runner.go:195] Run: systemctl --version
I0907 00:36:40.265641  329402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:36:40.282562  329402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:36:40.373568  329402 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
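
The JSON listing above uses the fields id, repoDigests, repoTags and size (with size emitted as a string of bytes). Here is a sketch that decodes that output into a Go struct; the field set is taken directly from the log, everything else is illustrative.

// image_list.go - sketch that decodes the `image ls --format json` output shown above.
// Field names (id, repoDigests, repoTags, size) mirror the JSON in this log; note that
// size is a string of bytes, not a number.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func firstOr(s []string, def string) string {
	if len(s) > 0 {
		return s[0]
	}
	return def
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-258398",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", firstOr(img.RepoTags, "<untagged>"), img.Size)
	}
}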

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258398 image ls --format yaml --alsologtostderr:
- id: 2e4e1de285af9647a376b662c27c6e0500a5fce25435520ea4752f2108887111
repoDigests:
- localhost/minikube-local-cache-test@sha256:e4d9e3bd4adb5f71f4267931d87890d183204c6a28c401c696ef7d6cd33db5a7
repoTags:
- localhost/minikube-local-cache-test:functional-258398
size: "3330"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-258398
size: "4788229"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258398 image ls --format yaml --alsologtostderr:
I0907 00:36:35.963357  329115 out.go:360] Setting OutFile to fd 1 ...
I0907 00:36:35.963543  329115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:35.963576  329115 out.go:374] Setting ErrFile to fd 2...
I0907 00:36:35.963599  329115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:35.963853  329115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
I0907 00:36:35.964530  329115 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:35.964694  329115 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:35.965212  329115 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:36:35.982292  329115 ssh_runner.go:195] Run: systemctl --version
I0907 00:36:35.982355  329115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:36:36.005254  329115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:36:36.093485  329115 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh pgrep buildkitd: exit status 1 (259.154213ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image build -t localhost/my-image:functional-258398 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 image build -t localhost/my-image:functional-258398 testdata/build --alsologtostderr: (3.529178792s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258398 image build -t localhost/my-image:functional-258398 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a392740e12f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-258398
--> cc22fe6bee7
Successfully tagged localhost/my-image:functional-258398
cc22fe6bee7b61e293a6a7f751f9f5b1c359d296cdf8af57c4bce59335e5c6d3
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258398 image build -t localhost/my-image:functional-258398 testdata/build --alsologtostderr:
I0907 00:36:36.459604  329204 out.go:360] Setting OutFile to fd 1 ...
I0907 00:36:36.460280  329204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:36.460300  329204 out.go:374] Setting ErrFile to fd 2...
I0907 00:36:36.460306  329204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0907 00:36:36.460614  329204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
I0907 00:36:36.461405  329204 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:36.462085  329204 config.go:182] Loaded profile config "functional-258398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0907 00:36:36.462639  329204 cli_runner.go:164] Run: docker container inspect functional-258398 --format={{.State.Status}}
I0907 00:36:36.480846  329204 ssh_runner.go:195] Run: systemctl --version
I0907 00:36:36.480908  329204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258398
I0907 00:36:36.498954  329204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/functional-258398/id_rsa Username:docker}
I0907 00:36:36.589595  329204 build_images.go:161] Building image from path: /tmp/build.1993912935.tar
I0907 00:36:36.589664  329204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0907 00:36:36.599518  329204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1993912935.tar
I0907 00:36:36.603845  329204 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1993912935.tar: stat -c "%s %y" /var/lib/minikube/build/build.1993912935.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1993912935.tar': No such file or directory
I0907 00:36:36.603876  329204 ssh_runner.go:362] scp /tmp/build.1993912935.tar --> /var/lib/minikube/build/build.1993912935.tar (3072 bytes)
I0907 00:36:36.628601  329204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1993912935
I0907 00:36:36.637822  329204 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1993912935 -xf /var/lib/minikube/build/build.1993912935.tar
I0907 00:36:36.647216  329204 crio.go:315] Building image: /var/lib/minikube/build/build.1993912935
I0907 00:36:36.647297  329204 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-258398 /var/lib/minikube/build/build.1993912935 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0907 00:36:39.905052  329204 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-258398 /var/lib/minikube/build/build.1993912935 --cgroup-manager=cgroupfs: (3.257728524s)
I0907 00:36:39.905122  329204 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1993912935
I0907 00:36:39.913993  329204 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1993912935.tar
I0907 00:36:39.922943  329204 build_images.go:217] Built localhost/my-image:functional-258398 from /tmp/build.1993912935.tar
I0907 00:36:39.922991  329204 build_images.go:133] succeeded building to: functional-258398
I0907 00:36:39.922997  329204 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
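
The ImageBuild stderr shows the pipeline minikube uses on a cri-o cluster: the build context is tarred locally (/tmp/build.1993912935.tar), copied into the node under /var/lib/minikube/build, unpacked, built with `sudo podman build ... --cgroup-manager=cgroupfs`, and the staging files are removed afterwards. Below is a sketch of the same build-and-verify pattern driven from the client side; the tag and context directory come from the test above, the verification step is illustrative.

// image_build.go - sketch of the build-and-verify pattern in ImageCommands/ImageBuild:
// build an image from a local context via `minikube image build` (which stages the context
// inside the node and runs podman there, per the log above), then confirm the tag is listed.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const tag = "localhost/my-image:functional-258398"
	build := exec.Command("out/minikube-linux-arm64", "-p", "functional-258398",
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	ls, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-258398",
		"image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	if !strings.Contains(string(ls), tag) {
		log.Fatalf("built image %s not listed", tag)
	}
	log.Printf("%s built and listed", tag)
}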

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-258398
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image load --daemon kicbase/echo-server:functional-258398 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-258398 image load --daemon kicbase/echo-server:functional-258398 --alsologtostderr: (1.315315725s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image load --daemon kicbase/echo-server:functional-258398 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-258398
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image load --daemon kicbase/echo-server:functional-258398 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image save kicbase/echo-server:functional-258398 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image rm kicbase/echo-server:functional-258398 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-258398
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 image save --daemon kicbase/echo-server:functional-258398 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-258398
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)
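
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a save/remove/load round trip for a cluster image. Here is a sketch of that round trip as a single program; the tarball path is arbitrary and only stands in for the workspace path used above.

// image_roundtrip.go - sketch of the save/remove/load round trip exercised by the
// ImageSaveToFile, ImageRemove and ImageLoadFromFile tests above.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-258398"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	const img = "kicbase/echo-server:functional-258398"
	const tar = "/tmp/echo-server-save.tar" // illustrative path
	mk("image", "save", img, tar) // export the image from the cluster to a local tarball
	mk("image", "rm", img)        // drop it from the cluster's container storage
	mk("image", "load", tar)      // re-import it from the tarball
	mk("image", "ls")             // the tag should be listed again
}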

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-258398 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-258398 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-258398 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 323788: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-258398 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-258398 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-258398 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 service list -o json
functional_test.go:1504: Took "334.687682ms" to run "out/minikube-linux-arm64 -p functional-258398 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "374.978599ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "62.201282ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "353.489485ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "64.087661ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdany-port1538918264/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757205078345806115" to /tmp/TestFunctionalparallelMountCmdany-port1538918264/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757205078345806115" to /tmp/TestFunctionalparallelMountCmdany-port1538918264/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757205078345806115" to /tmp/TestFunctionalparallelMountCmdany-port1538918264/001/test-1757205078345806115
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.171221ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0907 00:31:18.678200  296249 retry.go:31] will retry after 311.594199ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  7 00:31 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  7 00:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  7 00:31 test-1757205078345806115
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh cat /mount-9p/test-1757205078345806115
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-258398 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [212f4324-f818-4b0d-b83e-8e29dcb1a377] Pending
helpers_test.go:352: "busybox-mount" [212f4324-f818-4b0d-b83e-8e29dcb1a377] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [212f4324-f818-4b0d-b83e-8e29dcb1a377] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [212f4324-f818-4b0d-b83e-8e29dcb1a377] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003000298s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-258398 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdany-port1538918264/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdspecific-port3352215716/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.114797ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0907 00:31:27.284144  296249 retry.go:31] will retry after 252.039622ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdspecific-port3352215716/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh "sudo umount -f /mount-9p": exit status 1 (289.930986ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-258398 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdspecific-port3352215716/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T" /mount1: exit status 1 (598.050542ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0907 00:31:29.165796  296249 retry.go:31] will retry after 381.653683ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258398 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-258398 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258398 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3408236880/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-258398
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-258398
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-258398
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0907 00:42:59.743506  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m19.118979582s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.92s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 kubectl -- rollout status deployment/busybox: (5.753981467s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-5t8ft -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-9gq8k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-fvgk6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-5t8ft -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-9gq8k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-fvgk6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-5t8ft -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-9gq8k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-fvgk6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.03s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-5t8ft -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-5t8ft -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-9gq8k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-9gq8k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-fvgk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 kubectl -- exec busybox-7b57f96db7-fvgk6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 node add --alsologtostderr -v 5: (57.674972849s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5: (1.001523292s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.68s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-852567 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp testdata/cp-test.txt ha-852567:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1361311298/001/cp-test_ha-852567.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567:/home/docker/cp-test.txt ha-852567-m02:/home/docker/cp-test_ha-852567_ha-852567-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test_ha-852567_ha-852567-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567:/home/docker/cp-test.txt ha-852567-m03:/home/docker/cp-test_ha-852567_ha-852567-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test_ha-852567_ha-852567-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567:/home/docker/cp-test.txt ha-852567-m04:/home/docker/cp-test_ha-852567_ha-852567-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test_ha-852567_ha-852567-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp testdata/cp-test.txt ha-852567-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1361311298/001/cp-test_ha-852567-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m02:/home/docker/cp-test.txt ha-852567:/home/docker/cp-test_ha-852567-m02_ha-852567.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test_ha-852567-m02_ha-852567.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m02:/home/docker/cp-test.txt ha-852567-m03:/home/docker/cp-test_ha-852567-m02_ha-852567-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test_ha-852567-m02_ha-852567-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m02:/home/docker/cp-test.txt ha-852567-m04:/home/docker/cp-test_ha-852567-m02_ha-852567-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test_ha-852567-m02_ha-852567-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp testdata/cp-test.txt ha-852567-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1361311298/001/cp-test_ha-852567-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m03:/home/docker/cp-test.txt ha-852567:/home/docker/cp-test_ha-852567-m03_ha-852567.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test_ha-852567-m03_ha-852567.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m03:/home/docker/cp-test.txt ha-852567-m02:/home/docker/cp-test_ha-852567-m03_ha-852567-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test_ha-852567-m03_ha-852567-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m03:/home/docker/cp-test.txt ha-852567-m04:/home/docker/cp-test_ha-852567-m03_ha-852567-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test_ha-852567-m03_ha-852567-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp testdata/cp-test.txt ha-852567-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1361311298/001/cp-test_ha-852567-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m04:/home/docker/cp-test.txt ha-852567:/home/docker/cp-test_ha-852567-m04_ha-852567.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567 "sudo cat /home/docker/cp-test_ha-852567-m04_ha-852567.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m04:/home/docker/cp-test.txt ha-852567-m02:/home/docker/cp-test_ha-852567-m04_ha-852567-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m02 "sudo cat /home/docker/cp-test_ha-852567-m04_ha-852567-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 cp ha-852567-m04:/home/docker/cp-test.txt ha-852567-m03:/home/docker/cp-test_ha-852567-m04_ha-852567-m03.txt
E0907 00:46:06.467727  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:06.474179  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:06.485533  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:06.506989  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:06.548446  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:06.629867  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:06.791313  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m04 "sudo cat /home/docker/cp-test.txt"
E0907 00:46:07.113383  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 ssh -n ha-852567-m03 "sudo cat /home/docker/cp-test_ha-852567-m04_ha-852567-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.04s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node stop m02 --alsologtostderr -v 5
E0907 00:46:07.756032  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:09.037867  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:11.599319  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:16.720602  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 node stop m02 --alsologtostderr -v 5: (11.946385232s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5: exit status 7 (741.377867ms)

                                                
                                                
-- stdout --
	ha-852567
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-852567-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-852567-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-852567-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:46:19.466411  345636 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:46:19.466529  345636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:46:19.466535  345636 out.go:374] Setting ErrFile to fd 2...
	I0907 00:46:19.466540  345636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:46:19.466802  345636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:46:19.466987  345636 out.go:368] Setting JSON to false
	I0907 00:46:19.467076  345636 mustload.go:65] Loading cluster: ha-852567
	I0907 00:46:19.467149  345636 notify.go:220] Checking for updates...
	I0907 00:46:19.468436  345636 config.go:182] Loaded profile config "ha-852567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:46:19.468461  345636 status.go:174] checking status of ha-852567 ...
	I0907 00:46:19.469228  345636 cli_runner.go:164] Run: docker container inspect ha-852567 --format={{.State.Status}}
	I0907 00:46:19.490230  345636 status.go:371] ha-852567 host status = "Running" (err=<nil>)
	I0907 00:46:19.490254  345636 host.go:66] Checking if "ha-852567" exists ...
	I0907 00:46:19.490669  345636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-852567
	I0907 00:46:19.523703  345636 host.go:66] Checking if "ha-852567" exists ...
	I0907 00:46:19.524064  345636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:46:19.524118  345636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-852567
	I0907 00:46:19.543846  345636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/ha-852567/id_rsa Username:docker}
	I0907 00:46:19.649658  345636 ssh_runner.go:195] Run: systemctl --version
	I0907 00:46:19.654417  345636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:46:19.668317  345636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 00:46:19.738765  345636 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-07 00:46:19.72964106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 00:46:19.739304  345636 kubeconfig.go:125] found "ha-852567" server: "https://192.168.49.254:8443"
	I0907 00:46:19.739339  345636 api_server.go:166] Checking apiserver status ...
	I0907 00:46:19.739381  345636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:46:19.750578  345636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1474/cgroup
	I0907 00:46:19.760281  345636 api_server.go:182] apiserver freezer: "2:freezer:/docker/9c701bc6fd7e022e65744464b0207284dd9508bfb9222e089b87c8d923613ee1/crio/crio-dd599e0a51263bf6ad0a11dd36d4e011b5057b2c12886e58449bb97b0997505e"
	I0907 00:46:19.760350  345636 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9c701bc6fd7e022e65744464b0207284dd9508bfb9222e089b87c8d923613ee1/crio/crio-dd599e0a51263bf6ad0a11dd36d4e011b5057b2c12886e58449bb97b0997505e/freezer.state
	I0907 00:46:19.769181  345636 api_server.go:204] freezer state: "THAWED"
	I0907 00:46:19.769206  345636 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0907 00:46:19.777846  345636 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0907 00:46:19.777874  345636 status.go:463] ha-852567 apiserver status = Running (err=<nil>)
	I0907 00:46:19.777885  345636 status.go:176] ha-852567 status: &{Name:ha-852567 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:46:19.777901  345636 status.go:174] checking status of ha-852567-m02 ...
	I0907 00:46:19.778197  345636 cli_runner.go:164] Run: docker container inspect ha-852567-m02 --format={{.State.Status}}
	I0907 00:46:19.796724  345636 status.go:371] ha-852567-m02 host status = "Stopped" (err=<nil>)
	I0907 00:46:19.796749  345636 status.go:384] host is not running, skipping remaining checks
	I0907 00:46:19.796755  345636 status.go:176] ha-852567-m02 status: &{Name:ha-852567-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:46:19.796788  345636 status.go:174] checking status of ha-852567-m03 ...
	I0907 00:46:19.797139  345636 cli_runner.go:164] Run: docker container inspect ha-852567-m03 --format={{.State.Status}}
	I0907 00:46:19.814538  345636 status.go:371] ha-852567-m03 host status = "Running" (err=<nil>)
	I0907 00:46:19.814571  345636 host.go:66] Checking if "ha-852567-m03" exists ...
	I0907 00:46:19.814908  345636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-852567-m03
	I0907 00:46:19.831653  345636 host.go:66] Checking if "ha-852567-m03" exists ...
	I0907 00:46:19.831958  345636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:46:19.832037  345636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-852567-m03
	I0907 00:46:19.849458  345636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/ha-852567-m03/id_rsa Username:docker}
	I0907 00:46:19.941815  345636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:46:19.954317  345636 kubeconfig.go:125] found "ha-852567" server: "https://192.168.49.254:8443"
	I0907 00:46:19.954350  345636 api_server.go:166] Checking apiserver status ...
	I0907 00:46:19.954392  345636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:46:19.964895  345636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	I0907 00:46:19.974425  345636 api_server.go:182] apiserver freezer: "2:freezer:/docker/cb8d43a9471c728932b55d02b53d12f28a8013c190f692d423df5c693c3e9dc0/crio/crio-f6752755b37be3b098e2517d04a8f5e62b6407de77b00455809645612f94abae"
	I0907 00:46:19.974549  345636 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cb8d43a9471c728932b55d02b53d12f28a8013c190f692d423df5c693c3e9dc0/crio/crio-f6752755b37be3b098e2517d04a8f5e62b6407de77b00455809645612f94abae/freezer.state
	I0907 00:46:19.983273  345636 api_server.go:204] freezer state: "THAWED"
	I0907 00:46:19.983303  345636 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0907 00:46:19.991595  345636 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0907 00:46:19.991622  345636 status.go:463] ha-852567-m03 apiserver status = Running (err=<nil>)
	I0907 00:46:19.991631  345636 status.go:176] ha-852567-m03 status: &{Name:ha-852567-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:46:19.991676  345636 status.go:174] checking status of ha-852567-m04 ...
	I0907 00:46:19.991993  345636 cli_runner.go:164] Run: docker container inspect ha-852567-m04 --format={{.State.Status}}
	I0907 00:46:20.013015  345636 status.go:371] ha-852567-m04 host status = "Running" (err=<nil>)
	I0907 00:46:20.013045  345636 host.go:66] Checking if "ha-852567-m04" exists ...
	I0907 00:46:20.013369  345636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-852567-m04
	I0907 00:46:20.032653  345636 host.go:66] Checking if "ha-852567-m04" exists ...
	I0907 00:46:20.033153  345636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:46:20.033216  345636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-852567-m04
	I0907 00:46:20.053430  345636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/ha-852567-m04/id_rsa Username:docker}
	I0907 00:46:20.142025  345636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:46:20.155169  345636 status.go:176] ha-852567-m04 status: &{Name:ha-852567-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node start m02 --alsologtostderr -v 5
E0907 00:46:26.962889  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:46:47.444706  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 node start m02 --alsologtostderr -v 5: (30.954984889s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5: (1.401487596s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.258157428s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 stop --alsologtostderr -v 5
E0907 00:47:28.406095  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 stop --alsologtostderr -v 5: (36.838754684s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 start --wait true --alsologtostderr -v 5
E0907 00:47:59.743403  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:48:50.328168  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 start --wait true --alsologtostderr -v 5: (1m24.873086045s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.90s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 node delete m03 --alsologtostderr -v 5: (11.173948289s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (25.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 stop --alsologtostderr -v 5: (24.986923977s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5: exit status 7 (120.266106ms)

                                                
                                                
-- stdout --
	ha-852567
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-852567-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-852567-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:49:34.492257  359543 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:49:34.492374  359543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:49:34.492384  359543 out.go:374] Setting ErrFile to fd 2...
	I0907 00:49:34.492391  359543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:49:34.492637  359543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 00:49:34.492863  359543 out.go:368] Setting JSON to false
	I0907 00:49:34.492918  359543 mustload.go:65] Loading cluster: ha-852567
	I0907 00:49:34.492991  359543 notify.go:220] Checking for updates...
	I0907 00:49:34.493938  359543 config.go:182] Loaded profile config "ha-852567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:49:34.493967  359543 status.go:174] checking status of ha-852567 ...
	I0907 00:49:34.494483  359543 cli_runner.go:164] Run: docker container inspect ha-852567 --format={{.State.Status}}
	I0907 00:49:34.513090  359543 status.go:371] ha-852567 host status = "Stopped" (err=<nil>)
	I0907 00:49:34.513113  359543 status.go:384] host is not running, skipping remaining checks
	I0907 00:49:34.513120  359543 status.go:176] ha-852567 status: &{Name:ha-852567 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:49:34.513225  359543 status.go:174] checking status of ha-852567-m02 ...
	I0907 00:49:34.513594  359543 cli_runner.go:164] Run: docker container inspect ha-852567-m02 --format={{.State.Status}}
	I0907 00:49:34.541547  359543 status.go:371] ha-852567-m02 host status = "Stopped" (err=<nil>)
	I0907 00:49:34.541572  359543 status.go:384] host is not running, skipping remaining checks
	I0907 00:49:34.541580  359543 status.go:176] ha-852567-m02 status: &{Name:ha-852567-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:49:34.541600  359543 status.go:174] checking status of ha-852567-m04 ...
	I0907 00:49:34.541894  359543 cli_runner.go:164] Run: docker container inspect ha-852567-m04 --format={{.State.Status}}
	I0907 00:49:34.559036  359543 status.go:371] ha-852567-m04 host status = "Stopped" (err=<nil>)
	I0907 00:49:34.559060  359543 status.go:384] host is not running, skipping remaining checks
	I0907 00:49:34.559068  359543 status.go:176] ha-852567-m04 status: &{Name:ha-852567-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (91.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0907 00:51:02.821326  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m30.69414411s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0907 00:51:06.468258  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 node add --control-plane --alsologtostderr -v 5
E0907 00:51:34.173837  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 node add --control-plane --alsologtostderr -v 5: (1m17.286744215s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-852567 status --alsologtostderr -v 5: (1.020561533s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019642876s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                    
TestJSONOutput/start/Command (80.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-876101 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0907 00:52:59.744422  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-876101 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.235885339s)
--- PASS: TestJSONOutput/start/Command (80.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-876101 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-876101 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-876101 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-876101 --output=json --user=testUser: (5.85211999s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-769832 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-769832 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.621527ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f161d8ba-88c2-461f-b85e-f1ace2e68caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-769832] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b595ac90-aa28-422b-bfa2-3998ae09f629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21132"}}
	{"specversion":"1.0","id":"b48a02b3-e65a-4723-9efb-e59406c34151","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea8c42d6-aebd-4ed2-af18-21bd0ad1c7ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig"}}
	{"specversion":"1.0","id":"c3c4c710-d414-40d1-8d6b-165969374387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube"}}
	{"specversion":"1.0","id":"9caa2ed0-5c70-4b62-b9a1-85f5dc7c83bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3da50be8-a43d-4ceb-998f-c1e671f57773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82bec886-2573-4b4f-865d-7500c7e2c7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-769832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-769832
--- PASS: TestErrorJSONOutput (0.24s)
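Every line minikube writes with --output=json is a CloudEvents-style JSON object like the ones in the stdout above, with the payload under the data field. A minimal sketch of a consumer that separates step events from error events; the struct and field names below are only the ones visible in this output, and the program simply reads the command's stdout from a pipe:

	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)
	
	// event carries only the fields visible in the JSON lines above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // ignore anything that is not a JSON event line
			}
			switch e.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			default:
				fmt.Println(e.Data["message"])
			}
		}
	}

The DistinctCurrentSteps and IncreasingCurrentSteps subtests earlier in this report assert exactly this kind of ordering on the data.currentstep values: no duplicates, and values that only increase.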

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.25s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-868513 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-868513 --network=: (40.165479166s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-868513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-868513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-868513: (2.058696389s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.25s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.63s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-613577 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-613577 --network=bridge: (31.693118164s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-613577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-613577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-613577: (1.909209921s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.63s)

                                                
                                    
TestKicExistingNetwork (35.6s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0907 00:55:22.645838  296249 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0907 00:55:22.662589  296249 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0907 00:55:22.663415  296249 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0907 00:55:22.663448  296249 cli_runner.go:164] Run: docker network inspect existing-network
W0907 00:55:22.679583  296249 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0907 00:55:22.679622  296249 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0907 00:55:22.679635  296249 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0907 00:55:22.679751  296249 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0907 00:55:22.698467  296249 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94b882556325 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:a1:d5:d3:ef:e7} reservation:<nil>}
I0907 00:55:22.698861  296249 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017e3e60}
I0907 00:55:22.698921  296249 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0907 00:55:22.698974  296249 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0907 00:55:22.766479  296249 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-504420 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-504420 --network=existing-network: (33.499467133s)
helpers_test.go:175: Cleaning up "existing-network-504420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-504420
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-504420: (1.943853492s)
I0907 00:55:58.226052  296249 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.60s)
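The log above shows the setup this test depends on: the network has to exist before minikube start --network=existing-network runs, so the test first skips the subnet already taken by an earlier profile (192.168.49.0/24) and creates the network on the next free /24 with the same flags minikube itself uses. A rough standalone sketch of that pre-creation step, shelling out to the docker CLI; the subnet, labels, and network name are copied from the log, and the removal at the end is illustrative:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Create the bridge network exactly as shown in the log, including minikube's labels.
		create := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		if out, err := create.CombinedOutput(); err != nil {
			fmt.Printf("network create failed: %v\n%s", err, out)
			return
		}
		fmt.Println("created network existing-network on 192.168.58.0/24")
	
		// Remove it again when done (the test deletes the profile first, then the network).
		if out, err := exec.Command("docker", "network", "rm", "existing-network").CombinedOutput(); err != nil {
			fmt.Printf("network rm failed: %v\n%s", err, out)
		}
	}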

                                                
                                    
TestKicCustomSubnet (37.14s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-886107 --subnet=192.168.60.0/24
E0907 00:56:06.472982  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-886107 --subnet=192.168.60.0/24: (35.062179107s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-886107 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-886107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-886107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-886107: (2.046954302s)
--- PASS: TestKicCustomSubnet (37.14s)
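The inspect step above reads the configured subnet back out of the network that was created for the profile. A small sketch that performs the same verification programmatically, reusing the identical --format string and comparing the result with the subnet that was requested; the network name and subnet are copied from this run:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		want := "192.168.60.0/24" // the value passed to --subnet above
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-886107",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		got := strings.TrimSpace(string(out))
		fmt.Printf("requested %s, configured %s, match=%v\n", want, got, got == want)
	}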

                                                
                                    
TestKicStaticIP (31.73s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-557434 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-557434 --static-ip=192.168.200.200: (29.331345514s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-557434 ip
helpers_test.go:175: Cleaning up "static-ip-557434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-557434
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-557434: (2.224266594s)
--- PASS: TestKicStaticIP (31.73s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.03s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-547415 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-547415 --driver=docker  --container-runtime=crio: (30.966699456s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-549843 --driver=docker  --container-runtime=crio
E0907 00:57:59.743171  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-549843 --driver=docker  --container-runtime=crio: (31.818399478s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-547415
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-549843
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-549843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-549843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-549843: (1.942193672s)
helpers_test.go:175: Cleaning up "first-547415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-547415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-547415: (1.936985285s)
--- PASS: TestMinikubeProfile (68.03s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-557660 --memory=3072 --mount-string /tmp/TestMountStartserial19927234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-557660 --memory=3072 --mount-string /tmp/TestMountStartserial19927234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.231808241s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-557660 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-559605 --memory=3072 --mount-string /tmp/TestMountStartserial19927234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-559605 --memory=3072 --mount-string /tmp/TestMountStartserial19927234/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.269315483s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.27s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.34s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-557660 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-557660 --alsologtostderr -v=5: (1.615346979s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-559605
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-559605: (1.212211087s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.91s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-559605
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-559605: (6.906739301s)
--- PASS: TestMountStart/serial/RestartStopped (7.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (137.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812300 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812300 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.988831663s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.58s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- rollout status deployment/busybox
E0907 01:01:06.467848  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-812300 -- rollout status deployment/busybox: (5.899916581s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-77xqz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-scs6c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-77xqz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-scs6c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-77xqz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-scs6c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.12s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-77xqz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-77xqz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-scs6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812300 -- exec busybox-7b57f96db7-scs6c -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (56.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-812300 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-812300 -v=5 --alsologtostderr: (55.655673071s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.33s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-812300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp testdata/cp-test.txt multinode-812300:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2797565319/001/cp-test_multinode-812300.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300:/home/docker/cp-test.txt multinode-812300-m02:/home/docker/cp-test_multinode-812300_multinode-812300-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m02 "sudo cat /home/docker/cp-test_multinode-812300_multinode-812300-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300:/home/docker/cp-test.txt multinode-812300-m03:/home/docker/cp-test_multinode-812300_multinode-812300-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m03 "sudo cat /home/docker/cp-test_multinode-812300_multinode-812300-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp testdata/cp-test.txt multinode-812300-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2797565319/001/cp-test_multinode-812300-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300-m02:/home/docker/cp-test.txt multinode-812300:/home/docker/cp-test_multinode-812300-m02_multinode-812300.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300 "sudo cat /home/docker/cp-test_multinode-812300-m02_multinode-812300.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300-m02:/home/docker/cp-test.txt multinode-812300-m03:/home/docker/cp-test_multinode-812300-m02_multinode-812300-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m03 "sudo cat /home/docker/cp-test_multinode-812300-m02_multinode-812300-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp testdata/cp-test.txt multinode-812300-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2797565319/001/cp-test_multinode-812300-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300-m03:/home/docker/cp-test.txt multinode-812300:/home/docker/cp-test_multinode-812300-m03_multinode-812300.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300 "sudo cat /home/docker/cp-test_multinode-812300-m03_multinode-812300.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 cp multinode-812300-m03:/home/docker/cp-test.txt multinode-812300-m02:/home/docker/cp-test_multinode-812300-m03_multinode-812300-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 ssh -n multinode-812300-m02 "sudo cat /home/docker/cp-test_multinode-812300-m03_multinode-812300-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.01s)

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-812300 node stop m03: (1.219242373s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812300 status: exit status 7 (510.051718ms)

                                                
                                                
-- stdout --
	multinode-812300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-812300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-812300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr: exit status 7 (522.573246ms)

                                                
                                                
-- stdout --
	multinode-812300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-812300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-812300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 01:02:18.921001  412952 out.go:360] Setting OutFile to fd 1 ...
	I0907 01:02:18.921188  412952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:02:18.921215  412952 out.go:374] Setting ErrFile to fd 2...
	I0907 01:02:18.921232  412952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:02:18.921514  412952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 01:02:18.921744  412952 out.go:368] Setting JSON to false
	I0907 01:02:18.921810  412952 mustload.go:65] Loading cluster: multinode-812300
	I0907 01:02:18.921901  412952 notify.go:220] Checking for updates...
	I0907 01:02:18.923525  412952 config.go:182] Loaded profile config "multinode-812300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 01:02:18.923588  412952 status.go:174] checking status of multinode-812300 ...
	I0907 01:02:18.925959  412952 cli_runner.go:164] Run: docker container inspect multinode-812300 --format={{.State.Status}}
	I0907 01:02:18.944468  412952 status.go:371] multinode-812300 host status = "Running" (err=<nil>)
	I0907 01:02:18.944491  412952 host.go:66] Checking if "multinode-812300" exists ...
	I0907 01:02:18.944802  412952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-812300
	I0907 01:02:18.969577  412952 host.go:66] Checking if "multinode-812300" exists ...
	I0907 01:02:18.969895  412952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 01:02:18.969944  412952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-812300
	I0907 01:02:18.988485  412952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/multinode-812300/id_rsa Username:docker}
	I0907 01:02:19.078573  412952 ssh_runner.go:195] Run: systemctl --version
	I0907 01:02:19.082939  412952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 01:02:19.095947  412952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 01:02:19.169881  412952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-07 01:02:19.159157671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 01:02:19.170431  412952 kubeconfig.go:125] found "multinode-812300" server: "https://192.168.67.2:8443"
	I0907 01:02:19.170470  412952 api_server.go:166] Checking apiserver status ...
	I0907 01:02:19.170520  412952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 01:02:19.182303  412952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I0907 01:02:19.192062  412952 api_server.go:182] apiserver freezer: "2:freezer:/docker/ca28bb03886de0af7d41922941245e95a2c391bd2e4e849a89eb9d888177b613/crio/crio-7d3aebaa38686d38f89720d6a03ede4bacddc524478c1eb05d6089a87f98d641"
	I0907 01:02:19.192135  412952 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ca28bb03886de0af7d41922941245e95a2c391bd2e4e849a89eb9d888177b613/crio/crio-7d3aebaa38686d38f89720d6a03ede4bacddc524478c1eb05d6089a87f98d641/freezer.state
	I0907 01:02:19.201469  412952 api_server.go:204] freezer state: "THAWED"
	I0907 01:02:19.201495  412952 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0907 01:02:19.209721  412952 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0907 01:02:19.209750  412952 status.go:463] multinode-812300 apiserver status = Running (err=<nil>)
	I0907 01:02:19.209761  412952 status.go:176] multinode-812300 status: &{Name:multinode-812300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 01:02:19.209785  412952 status.go:174] checking status of multinode-812300-m02 ...
	I0907 01:02:19.210118  412952 cli_runner.go:164] Run: docker container inspect multinode-812300-m02 --format={{.State.Status}}
	I0907 01:02:19.227219  412952 status.go:371] multinode-812300-m02 host status = "Running" (err=<nil>)
	I0907 01:02:19.227244  412952 host.go:66] Checking if "multinode-812300-m02" exists ...
	I0907 01:02:19.227565  412952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-812300-m02
	I0907 01:02:19.245204  412952 host.go:66] Checking if "multinode-812300-m02" exists ...
	I0907 01:02:19.245527  412952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 01:02:19.245578  412952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-812300-m02
	I0907 01:02:19.267636  412952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21132-294391/.minikube/machines/multinode-812300-m02/id_rsa Username:docker}
	I0907 01:02:19.353990  412952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 01:02:19.365720  412952 status.go:176] multinode-812300-m02 status: &{Name:multinode-812300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0907 01:02:19.365754  412952 status.go:174] checking status of multinode-812300-m03 ...
	I0907 01:02:19.366053  412952 cli_runner.go:164] Run: docker container inspect multinode-812300-m03 --format={{.State.Status}}
	I0907 01:02:19.382325  412952 status.go:371] multinode-812300-m03 host status = "Stopped" (err=<nil>)
	I0907 01:02:19.382349  412952 status.go:384] host is not running, skipping remaining checks
	I0907 01:02:19.382357  412952 status.go:176] multinode-812300-m03 status: &{Name:multinode-812300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
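The stderr above also documents how status decides an apiserver is healthy: it finds the kube-apiserver process, confirms its freezer cgroup is THAWED, and then calls https://<node-ip>:8443/healthz expecting a 200. A rough sketch of just that final probe; it assumes anonymous access to /healthz is allowed (the Kubernetes default) and skips TLS verification instead of using minikube's client certificates, so it is illustrative only:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// For illustration only: the real check authenticates with the profile's client certs.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "200 ok" on a healthy control plane
	}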

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-812300 node start m03 -v=5 --alsologtostderr: (7.555909252s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812300
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-812300
E0907 01:02:29.535245  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-812300: (24.861379097s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812300 --wait=true -v=5 --alsologtostderr
E0907 01:02:59.743255  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812300 --wait=true -v=5 --alsologtostderr: (55.470695463s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812300
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.46s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-812300 node delete m03: (4.868296517s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-812300 stop: (23.694229788s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812300 status: exit status 7 (102.276761ms)

                                                
                                                
-- stdout --
	multinode-812300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-812300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr: exit status 7 (96.236899ms)

                                                
                                                
-- stdout --
	multinode-812300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-812300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 01:04:17.539832  420943 out.go:360] Setting OutFile to fd 1 ...
	I0907 01:04:17.539949  420943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:04:17.539960  420943 out.go:374] Setting ErrFile to fd 2...
	I0907 01:04:17.539966  420943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:04:17.540334  420943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 01:04:17.540576  420943 out.go:368] Setting JSON to false
	I0907 01:04:17.540610  420943 mustload.go:65] Loading cluster: multinode-812300
	I0907 01:04:17.541544  420943 config.go:182] Loaded profile config "multinode-812300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 01:04:17.541606  420943 status.go:174] checking status of multinode-812300 ...
	I0907 01:04:17.541813  420943 notify.go:220] Checking for updates...
	I0907 01:04:17.542261  420943 cli_runner.go:164] Run: docker container inspect multinode-812300 --format={{.State.Status}}
	I0907 01:04:17.561243  420943 status.go:371] multinode-812300 host status = "Stopped" (err=<nil>)
	I0907 01:04:17.561269  420943 status.go:384] host is not running, skipping remaining checks
	I0907 01:04:17.561277  420943 status.go:176] multinode-812300 status: &{Name:multinode-812300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 01:04:17.561309  420943 status.go:174] checking status of multinode-812300-m02 ...
	I0907 01:04:17.561621  420943 cli_runner.go:164] Run: docker container inspect multinode-812300-m02 --format={{.State.Status}}
	I0907 01:04:17.586483  420943 status.go:371] multinode-812300-m02 host status = "Stopped" (err=<nil>)
	I0907 01:04:17.586511  420943 status.go:384] host is not running, skipping remaining checks
	I0907 01:04:17.586518  420943 status.go:176] multinode-812300-m02 status: &{Name:multinode-812300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)
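The non-zero exits above are expected: "minikube status" returns exit code 7 when a profile's host is stopped. A minimal Go sketch of how a caller can distinguish that case from a real failure, using the binary path and profile name from this log (illustrative only):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 7 from "minikube status" means the host is stopped,
		// which the test above treats as the expected outcome.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-812300", "status")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("cluster is running")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			fmt.Print("cluster is stopped (exit 7):\n" + string(out))
		default:
			fmt.Println("unexpected status error:", err)
		}
	}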

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812300 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812300 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.261030166s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812300 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.97s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812300
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812300-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-812300-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.058893ms)

                                                
                                                
-- stdout --
	* [multinode-812300-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-812300-m02' is duplicated with machine name 'multinode-812300-m02' in profile 'multinode-812300'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812300-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812300-m03 --driver=docker  --container-runtime=crio: (34.75192638s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-812300
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-812300: exit status 80 (343.172392ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-812300 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-812300-m03 already exists in multinode-812300-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-812300-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-812300-m03: (1.945694441s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.21s)
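The exit codes above are the point of this test: a profile whose name collides with an existing machine name is rejected with exit code 14 (MK_USAGE), and adding a node that clashes with an existing profile fails with exit code 80 (GUEST_NODE_ADD). A minimal Go sketch of the first check, reusing the conflicting name from this log (illustrative, not test source):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// "multinode-812300-m02" is already a machine name inside the
		// multinode-812300 cluster, so this start is expected to exit 14.
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-p", "multinode-812300-m02", "--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			fmt.Println("duplicate profile name rejected as expected (exit 14)")
			return
		}
		fmt.Println("unexpected result:", err)
	}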

                                                
                                    
TestPreload (129.45s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-506384 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0907 01:06:06.468389  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-506384 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.71161148s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-506384 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-506384 image pull gcr.io/k8s-minikube/busybox: (3.801788717s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-506384
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-506384: (5.782424187s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-506384 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0907 01:07:42.823053  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-506384 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (56.461493885s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-506384 image list
helpers_test.go:175: Cleaning up "test-preload-506384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-506384
E0907 01:07:59.743010  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-506384: (2.450785206s)
--- PASS: TestPreload (129.45s)

                                                
                                    
TestScheduledStopUnix (108.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-013709 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-013709 --memory=3072 --driver=docker  --container-runtime=crio: (31.587071502s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-013709 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-013709 -n scheduled-stop-013709
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-013709 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0907 01:08:33.402169  296249 retry.go:31] will retry after 117.106µs: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.403354  296249 retry.go:31] will retry after 156.081µs: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.404420  296249 retry.go:31] will retry after 283.47µs: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.405510  296249 retry.go:31] will retry after 229.591µs: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.406613  296249 retry.go:31] will retry after 570.237µs: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.407718  296249 retry.go:31] will retry after 1.037886ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.408857  296249 retry.go:31] will retry after 1.171948ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.410999  296249 retry.go:31] will retry after 2.502024ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.414111  296249 retry.go:31] will retry after 3.83182ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.418298  296249 retry.go:31] will retry after 3.460761ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.422458  296249 retry.go:31] will retry after 6.619154ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.429617  296249 retry.go:31] will retry after 8.6379ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.438840  296249 retry.go:31] will retry after 14.218257ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.456345  296249 retry.go:31] will retry after 20.263053ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
I0907 01:08:33.477608  296249 retry.go:31] will retry after 38.344907ms: open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-013709 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-013709 -n scheduled-stop-013709
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-013709
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-013709 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-013709
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-013709: exit status 7 (75.618657ms)

                                                
                                                
-- stdout --
	scheduled-stop-013709
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-013709 -n scheduled-stop-013709
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-013709 -n scheduled-stop-013709: exit status 7 (74.387031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-013709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-013709
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-013709: (5.095696786s)
--- PASS: TestScheduledStopUnix (108.22s)
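The retry.go lines above come from polling for the scheduled-stop pid file under the profile directory, with the delay growing between attempts. A minimal Go sketch of that kind of wait loop; the path is copied from this log and stands in for wherever the profile directory lives (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPidFile polls for the scheduled-stop pid file, roughly doubling
	// the delay between attempts, similar to the retry loop in the log above.
	func waitForPidFile(path string, attempts int) (bool, error) {
		delay := 100 * time.Microsecond
		for i := 0; i < attempts; i++ {
			if _, err := os.Stat(path); err == nil {
				return true, nil
			} else if !os.IsNotExist(err) {
				return false, err
			}
			time.Sleep(delay)
			delay *= 2
		}
		return false, nil
	}

	func main() {
		pid := "/home/jenkins/minikube-integration/21132-294391/.minikube/profiles/scheduled-stop-013709/pid"
		found, err := waitForPidFile(pid, 15)
		fmt.Println("pid file present:", found, "err:", err)
	}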

                                                
                                    
TestInsufficientStorage (10.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-088751 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-088751 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.996541914s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8dbc4a03-8424-4585-a368-906cbb493afc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-088751] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b3ce50e-b375-4ced-bb6c-0d7e5d32d43b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21132"}}
	{"specversion":"1.0","id":"eaf92f46-54d1-499d-900b-883c395cc227","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b86f3ea-7beb-4d4a-81dd-1aca66e224b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig"}}
	{"specversion":"1.0","id":"cab4e9f6-ddf6-42ba-897f-7e80a0d4f09d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube"}}
	{"specversion":"1.0","id":"6439ac13-7cc1-434f-af3d-6a3974f3dca1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2b8f3335-bf36-454a-bd1e-0968b979dd09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"654b15b0-6dd7-4470-993b-34e1a1c01406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d95fb94b-5264-4b43-a309-0f9bd1194703","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"df71434c-b313-4f7d-a88e-42fdca1a65aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db48730c-ea86-41f8-80e4-3d99ee0d0b88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e9279047-326c-4eed-9738-0c2fc842a463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-088751\" primary control-plane node in \"insufficient-storage-088751\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"77bcbe57-966b-47e4-9acc-ed25c5a83e9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"55d43098-06ab-4a57-a386-a92180c1dd1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"19289c92-9dce-4601-974a-fff7a39f86d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-088751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-088751 --output=json --layout=cluster: exit status 7 (287.341352ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-088751","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-088751","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 01:09:57.802477  438416 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-088751" does not appear in /home/jenkins/minikube-integration/21132-294391/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-088751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-088751 --output=json --layout=cluster: exit status 7 (280.915649ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-088751","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-088751","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0907 01:09:58.084153  438478 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-088751" does not appear in /home/jenkins/minikube-integration/21132-294391/kubeconfig
	E0907 01:09:58.094806  438478 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/insufficient-storage-088751/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-088751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-088751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-088751: (1.887738854s)
--- PASS: TestInsufficientStorage (10.46s)
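With --output=json, minikube emits one CloudEvent per line, as shown in the stdout above; the out-of-disk condition arrives as an "io.k8s.sigs.minikube.error" event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A minimal Go sketch that scans such a stream from stdin and prints any error event (field names mirror the JSON in this log):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent captures just the fields needed to spot error events in
	// minikube's JSON output stream.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // the event lines are long
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %s (exitcode %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}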

                                                
                                    
TestRunningBinaryUpgrade (52.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2684416396 start -p running-upgrade-980660 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2684416396 start -p running-upgrade-980660 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.048291458s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-980660 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-980660 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.898420573s)
helpers_test.go:175: Cleaning up "running-upgrade-980660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-980660
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-980660: (2.18170045s)
--- PASS: TestRunningBinaryUpgrade (52.96s)

                                                
                                    
TestKubernetesUpgrade (356.8s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.603011964s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-183607
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-183607: (1.348463841s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-183607 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-183607 status --format={{.Host}}: exit status 7 (100.397453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.020375839s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-183607 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (124.954554ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-183607] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-183607
	    minikube start -p kubernetes-upgrade-183607 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1836072 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-183607 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-183607 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.70091457s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-183607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-183607
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-183607: (2.784917596s)
--- PASS: TestKubernetesUpgrade (356.80s)
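After the upgrade, the test confirms the new version through "kubectl version --output=json". A minimal Go sketch of that check, assuming kubectl is on PATH and the kubernetes-upgrade-183607 context from this run exists (illustrative, not test source):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask the API server for its version; after the upgrade above this
		// is expected to report v1.34.0.
		out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-183607",
			"version", "--output=json").Output()
		if err != nil {
			fmt.Println("kubectl version failed:", err)
			return
		}
		var v struct {
			ServerVersion struct {
				GitVersion string `json:"gitVersion"`
			} `json:"serverVersion"`
		}
		if err := json.Unmarshal(out, &v); err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		fmt.Println("server version:", v.ServerVersion.GitVersion)
	}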

                                                
                                    
TestMissingContainerUpgrade (123.36s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2760727755 start -p missing-upgrade-859941 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2760727755 start -p missing-upgrade-859941 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.343694887s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-859941
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-859941
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-859941 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0907 01:11:06.468380  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-859941 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.551302929s)
helpers_test.go:175: Cleaning up "missing-upgrade-859941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-859941
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-859941: (2.535241876s)
--- PASS: TestMissingContainerUpgrade (123.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-114149 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-114149 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (357.938499ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-114149] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-114149 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-114149 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.175025479s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-114149 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-114149 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-114149 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.982099401s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-114149 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-114149 status -o json: exit status 2 (468.589419ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-114149","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-114149
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-114149: (2.588925296s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.04s)

                                                
                                    
TestNoKubernetes/serial/Start (8.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-114149 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-114149 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.797589115s)
--- PASS: TestNoKubernetes/serial/Start (8.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-114149 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-114149 "sudo systemctl is-active --quiet service kubelet": exit status 1 (255.691356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.67s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-114149
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-114149: (1.201460507s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-114149 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-114149 --driver=docker  --container-runtime=crio: (6.707815083s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-114149 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-114149 "sudo systemctl is-active --quiet service kubelet": exit status 1 (251.466405ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (57.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3010931808 start -p stopped-upgrade-975132 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3010931808 start -p stopped-upgrade-975132 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.3365847s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3010931808 -p stopped-upgrade-975132 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3010931808 -p stopped-upgrade-975132 stop: (1.232428825s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-975132 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0907 01:12:59.743120  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-975132 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.824048719s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.39s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-975132
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-975132: (1.194319198s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                    
TestPause/serial/Start (84.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-744223 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-744223 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.733280965s)
--- PASS: TestPause/serial/Start (84.73s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (26.35s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-744223 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-744223 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.321045695s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.35s)

                                                
                                    
TestPause/serial/Pause (1.24s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-744223 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-744223 --alsologtostderr -v=5: (1.238192946s)
--- PASS: TestPause/serial/Pause (1.24s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-744223 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-744223 --output=json --layout=cluster: exit status 2 (416.747726ms)

                                                
                                                
-- stdout --
	{"Name":"pause-744223","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-744223","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
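In the --layout=cluster output above, StatusCode 418 marks a paused cluster, and the paused state is also why the status command itself exits 2. A minimal Go sketch that reads that JSON and reports the cluster state (field names mirror the JSON in this log; binary path and profile name are illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterState holds the top-level fields of the --layout=cluster output.
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"-p", "pause-744223", "--output=json", "--layout=cluster")
		out, err := cmd.Output() // a paused cluster makes this exit 2; stdout still carries the JSON
		if len(out) == 0 {
			fmt.Println("no status output:", err)
			return
		}
		var st clusterState
		if jerr := json.Unmarshal(out, &st); jerr != nil {
			fmt.Println("parse failed:", jerr)
			return
		}
		fmt.Printf("%s: %s (code %d)\n", st.Name, st.StatusName, st.StatusCode)
	}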

                                                
                                    
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-744223 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-744223 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.75s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-744223 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-744223 --alsologtostderr -v=5: (2.75082903s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-744223
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-744223: exit status 1 (16.368642ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-744223: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
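The cleanup check above relies on "docker volume inspect" failing once the profile's volume is gone: it exits non-zero, prints an empty JSON array, and reports "no such volume" on stderr. A minimal Go sketch of the same verification (profile name taken from this log, illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// After "minikube delete", inspecting the profile's volume should fail
		// with "no such volume", confirming the resource was cleaned up.
		out, err := exec.Command("docker", "volume", "inspect", "pause-744223").CombinedOutput()
		if err != nil && strings.Contains(string(out), "no such volume") {
			fmt.Println("volume removed as expected")
			return
		}
		fmt.Println("volume may still exist:", strings.TrimSpace(string(out)))
	}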

                                                
                                    
TestNetworkPlugins/group/false (5.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-690290 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-690290 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (209.287811ms)

                                                
                                                
-- stdout --
	* [false-690290] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 01:16:40.307383  475885 out.go:360] Setting OutFile to fd 1 ...
	I0907 01:16:40.307597  475885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:16:40.307623  475885 out.go:374] Setting ErrFile to fd 2...
	I0907 01:16:40.307642  475885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 01:16:40.307992  475885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-294391/.minikube/bin
	I0907 01:16:40.308466  475885 out.go:368] Setting JSON to false
	I0907 01:16:40.309476  475885 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10750,"bootTime":1757197051,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0907 01:16:40.309603  475885 start.go:140] virtualization:  
	I0907 01:16:40.315255  475885 out.go:179] * [false-690290] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0907 01:16:40.318470  475885 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 01:16:40.318547  475885 notify.go:220] Checking for updates...
	I0907 01:16:40.322396  475885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 01:16:40.325269  475885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-294391/kubeconfig
	I0907 01:16:40.328108  475885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-294391/.minikube
	I0907 01:16:40.332454  475885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0907 01:16:40.335624  475885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 01:16:40.339125  475885 config.go:182] Loaded profile config "kubernetes-upgrade-183607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 01:16:40.339241  475885 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 01:16:40.372949  475885 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0907 01:16:40.373085  475885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0907 01:16:40.437275  475885 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-07 01:16:40.426218695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0907 01:16:40.437392  475885 docker.go:318] overlay module found
	I0907 01:16:40.440379  475885 out.go:179] * Using the docker driver based on user configuration
	I0907 01:16:40.443272  475885 start.go:304] selected driver: docker
	I0907 01:16:40.443293  475885 start.go:918] validating driver "docker" against <nil>
	I0907 01:16:40.443308  475885 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 01:16:40.446725  475885 out.go:203] 
	W0907 01:16:40.449573  475885 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0907 01:16:40.452386  475885 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-690290 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-690290" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 01:16:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-183607
contexts:
- context:
    cluster: kubernetes-upgrade-183607
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 01:16:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-183607
  name: kubernetes-upgrade-183607
current-context: kubernetes-upgrade-183607
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-183607
  user:
    client-certificate: /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kubernetes-upgrade-183607/client.crt
    client-key: /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kubernetes-upgrade-183607/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-690290

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-690290"

                                                
                                                
----------------------- debugLogs end: false-690290 [took: 5.108687216s] --------------------------------
helpers_test.go:175: Cleaning up "false-690290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-690290
--- PASS: TestNetworkPlugins/group/false (5.52s)
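
Note on the MK_USAGE exit captured above: it is the expected outcome for this test. The "false" network-plugin profile deliberately starts without any CNI (presumably via --cni=false, which is not shown in this excerpt), and minikube refuses to bring up the "crio" runtime without one; that is also why every debugLogs probe afterwards reports a missing context or profile. As a rough sketch only (the profile name below is illustrative and not part of the test), a crio start that satisfies the requirement would pass an explicit CNI:

	minikube start -p crio-demo --driver=docker --container-runtime=crio --cni=bridge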

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (57.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-228090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-228090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (57.833060931s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (57.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-228090 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fe59fbd5-72c2-47d5-b0cf-f8f8facf519b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0907 01:19:09.536970  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [fe59fbd5-72c2-47d5-b0cf-f8f8facf519b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.007886234s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-228090 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-228090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-228090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.000120026s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-228090 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-228090 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-228090 --alsologtostderr -v=3: (11.956526807s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-228090 -n old-k8s-version-228090
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-228090 -n old-k8s-version-228090: exit status 7 (79.610754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-228090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (52.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-228090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-228090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.225081945s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-228090 -n old-k8s-version-228090
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-stt97" [05e953b3-27cd-4205-b7b9-d1a3ad5ba826] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003208196s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-stt97" [05e953b3-27cd-4205-b7b9-d1a3ad5ba826] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003311099s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-228090 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-228090 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-228090 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-228090 -n old-k8s-version-228090
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-228090 -n old-k8s-version-228090: exit status 2 (321.133255ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-228090 -n old-k8s-version-228090
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-228090 -n old-k8s-version-228090: exit status 2 (317.366791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-228090 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-228090 -n old-k8s-version-228090
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-228090 -n old-k8s-version-228090
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (69.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-984732 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-984732 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m9.465880539s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (84.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-746251 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-746251 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m24.245822952s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-984732 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [95a87e6c-4582-47ff-aa76-7cec8fe5c384] Pending
helpers_test.go:352: "busybox" [95a87e6c-4582-47ff-aa76-7cec8fe5c384] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [95a87e6c-4582-47ff-aa76-7cec8fe5c384] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003876072s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-984732 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-984732 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-984732 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.325022316s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-984732 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-984732 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-984732 --alsologtostderr -v=3: (12.081661748s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-984732 -n no-preload-984732
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-984732 -n no-preload-984732: exit status 7 (81.102704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-984732 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (49.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-984732 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-984732 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (49.178251965s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-984732 -n no-preload-984732
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-746251 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [43d37fa5-c100-45ce-ae99-5a0529691714] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0907 01:22:59.743761  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [43d37fa5-c100-45ce-ae99-5a0529691714] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003376344s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-746251 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8g7nk" [5cc8d975-6b0c-4175-bac9-6d786fed238c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003619503s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-746251 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-746251 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-746251 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-746251 --alsologtostderr -v=3: (12.053108726s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8g7nk" [5cc8d975-6b0c-4175-bac9-6d786fed238c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004498986s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-984732 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-984732 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-984732 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-984732 -n no-preload-984732
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-984732 -n no-preload-984732: exit status 2 (313.233013ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-984732 -n no-preload-984732
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-984732 -n no-preload-984732: exit status 2 (301.49684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-984732 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-984732 -n no-preload-984732
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-984732 -n no-preload-984732
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-746251 -n embed-certs-746251
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-746251 -n embed-certs-746251: exit status 7 (86.958557ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-746251 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (55.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-746251 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-746251 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (55.196357787s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-746251 -n embed-certs-746251
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-522814 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0907 01:24:07.829232  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:07.835600  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:07.847251  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:07.868626  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:07.910111  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:07.991495  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:08.153603  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:08.475526  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:09.117637  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:10.399626  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:12.961369  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-522814 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.091555544s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8vbbp" [9dec463d-2ef2-48ef-9274-504f37c464b6] Running
E0907 01:24:18.083362  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:24:22.824843  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003183936s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8vbbp" [9dec463d-2ef2-48ef-9274-504f37c464b6] Running
E0907 01:24:28.324650  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002614007s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-746251 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-746251 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-746251 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-746251 -n embed-certs-746251
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-746251 -n embed-certs-746251: exit status 2 (379.041293ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-746251 -n embed-certs-746251
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-746251 -n embed-certs-746251: exit status 2 (349.383159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-746251 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-746251 -n embed-certs-746251
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-746251 -n embed-certs-746251
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.4s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-925675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0907 01:24:48.806355  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-925675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (34.396458899s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-522814 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7ae94471-fd6e-455c-9870-1b8fc9b1c1bd] Pending
helpers_test.go:352: "busybox" [7ae94471-fd6e-455c-9870-1b8fc9b1c1bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7ae94471-fd6e-455c-9870-1b8fc9b1c1bd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004847326s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-522814 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-522814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-522814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.388096862s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-522814 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-522814 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-522814 --alsologtostderr -v=3: (12.089291888s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-925675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-925675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.159064649s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-925675 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-925675 --alsologtostderr -v=3: (1.222020599s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-925675 -n newest-cni-925675
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-925675 -n newest-cni-925675: exit status 7 (78.567068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-925675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (19.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-925675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-925675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (19.19694557s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-925675 -n newest-cni-925675
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814: exit status 7 (77.146814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-522814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-522814 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0907 01:25:29.767766  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-522814 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m4.694401695s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-925675 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)
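The image verification step runs image list --format=json for the profile and reports anything outside the expected Kubernetes/minikube image set, which is where the "Found non-minikube image" lines come from. A rough sketch of the same check, assuming for illustration only that the JSON output decodes as a flat array of image references (the real shape may differ) and that the allowlist below is just an example:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "newest-cni-925675" // profile name taken from the log above

	out, err := exec.Command("minikube", "-p", profile, "image", "list", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image list: %v", err)
	}

	// Assumption: the command emits a JSON array of image reference strings.
	var images []string
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}

	// Illustrative allowlist: anything not matching these prefixes is reported,
	// mirroring the "Found non-minikube image" lines above.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		known := false
		for _, prefix := range expected {
			if strings.HasPrefix(img, prefix) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}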

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-925675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-925675 --alsologtostderr -v=1: (1.44737666s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-925675 -n newest-cni-925675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-925675 -n newest-cni-925675: exit status 2 (448.618776ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-925675 -n newest-cni-925675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-925675 -n newest-cni-925675: exit status 2 (471.863457ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-925675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-925675 --alsologtostderr -v=1: (1.120350196s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-925675 -n newest-cni-925675
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-925675 -n newest-cni-925675
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.77s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0907 01:26:06.468351  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.685809831s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dvrjz" [dc7da8e6-6ce7-4510-97ce-5401a33d594a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004961046s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dvrjz" [dc7da8e6-6ce7-4510-97ce-5401a33d594a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004495785s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-522814 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-522814 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-522814 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814: exit status 2 (330.428825ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814: exit status 2 (311.85536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-522814 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-522814 -n default-k8s-diff-port-522814
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)
E0907 01:35:44.797621  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:35:49.538752  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:03.666890  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:06.468508  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.276984  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.283334  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.294703  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.316204  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.357682  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.439206  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.600690  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:34.922420  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:35.564494  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:36.846183  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:39.408515  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:44.530233  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:52.629957  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:36:54.772582  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:37:05.526753  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:37:15.254255  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:37:25.588562  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:37:33.228953  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:37:56.215606  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:37:59.743023  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:00.936772  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:06.767669  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:06.774042  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:06.785522  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:06.806872  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:06.848342  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:06.929804  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:07.091390  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:07.413079  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:08.055295  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:09.336596  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:11.898500  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:17.020343  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:27.262457  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:28.639302  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:38:47.743909  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:07.829620  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:18.137640  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:28.705305  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:41.727812  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:53.432770  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/default-k8s-diff-port-522814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:57.998501  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.005593  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.017135  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.038631  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.080040  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.161781  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.323273  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:58.644676  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:39:59.286657  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:00.576512  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:03.138070  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:08.259573  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:09.430136  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:18.500861  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:30.892863  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:38.982217  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:40:50.626739  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:41:02.827204  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:41:06.468533  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:41:19.944156  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:41:34.276980  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:41:52.630115  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:42:01.979643  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/enable-default-cni-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:42:05.526744  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:42:41.866020  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/bridge-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:42:59.742935  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:43:00.936949  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:43:06.767970  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
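The run of E-lines above is one recurring condition rather than a new failure per line: a background client-certificate reload keeps referencing profile directories (old-k8s-version-228090, enable-default-cni-690290, flannel-690290, bridge-690290, and others) whose client.crt files no longer exist, evidently because those profiles were cleaned up earlier in the run. A minimal sketch of guarding such a reload with an existence check, purely illustrative and not minikube's or client-go's actual code:

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"os"
)

// reloadClientCert loads a client certificate pair only if both files still
// exist, and reports a "profile gone" condition instead of a raw open error
// when they do not.
func reloadClientCert(certFile, keyFile string) (*tls.Certificate, error) {
	for _, p := range []string{certFile, keyFile} {
		if _, err := os.Stat(p); errors.Is(err, os.ErrNotExist) {
			return nil, fmt.Errorf("client cert %s no longer exists (profile deleted?)", p)
		}
	}
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &cert, nil
}

func main() {
	// Hypothetical path following the profile layout seen in the log above.
	base := os.Getenv("MINIKUBE_HOME") + "/profiles/old-k8s-version-228090/"
	if _, err := reloadClientCert(base+"client.crt", base+"client.key"); err != nil {
		fmt.Println("skip reload:", err)
	}
}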

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0907 01:26:51.689797  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.630076  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.636490  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.647851  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.669234  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.710571  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.791942  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:52.953730  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:53.275835  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:53.917956  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:55.199307  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:26:57.760945  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:27:02.882541  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.703560637s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.70s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-690290 "pgrep -a kubelet"
I0907 01:27:05.147598  296249 config.go:182] Loaded profile config "auto-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-690290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-txj6q" [d8a48a34-63de-46f8-b84e-6bc9442fa8cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-txj6q" [d8a48a34-63de-46f8-b84e-6bc9442fa8cc] Running
E0907 01:27:13.124423  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003412304s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.41s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-690290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
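The three short checks after NetCatPod probe connectivity from inside the netcat pod: DNS resolves kubernetes.default through the cluster DNS, Localhost connects to port 8080 over the pod's own loopback, and HairPin connects back to the pod through its own "netcat" Service name. A small sketch replaying them with kubectl, assuming the auto-690290 context and the netcat deployment from this log still exist:

package main

import (
	"fmt"
	"os/exec"
)

// kubectlExec runs a shell command inside the netcat deployment's pod.
func kubectlExec(context, shellCmd string) error {
	cmd := exec.Command("kubectl", "--context", context, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", shellCmd)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %s\n%s", shellCmd, out)
	return err
}

func main() {
	const ctx = "auto-690290" // context name taken from the log above
	checks := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},          // cluster DNS answers service names
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"}, // pod reaches its own port via loopback
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},      // pod reaches itself through its Service
	}
	for _, c := range checks {
		if err := kubectlExec(ctx, c.cmd); err != nil {
			fmt.Println(c.name, "failed:", err)
		} else {
			fmt.Println(c.name, "ok")
		}
	}
}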

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-bfm7h" [895ac4de-584a-47c3-b95b-09410c209a3d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004295857s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-690290 "pgrep -a kubelet"
I0907 01:28:07.218063  296249 config.go:182] Loaded profile config "kindnet-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-690290 replace --force -f testdata/netcat-deployment.yaml
I0907 01:28:07.649562  296249 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kz8tv" [244acb4b-c1b8-495a-b54d-f76476b191c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kz8tv" [244acb4b-c1b8-495a-b54d-f76476b191c0] Running
E0907 01:28:14.567203  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003191888s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-690290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0907 01:29:07.829304  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:29:35.531373  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:29:36.488577  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.82668922s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.83s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-690290 "pgrep -a kubelet"
I0907 01:29:41.467755  296249 config.go:182] Loaded profile config "custom-flannel-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-690290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wdqm4" [08736525-04c5-4883-9f54-f7fcc434e1f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wdqm4" [08736525-04c5-4883-9f54-f7fcc434e1f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003117098s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-690290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
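The Localhost and HairPin steps both run nc in zero-I/O scan mode (-z) with a 5 second timeout: Localhost confirms the pod can reach its own container port via 127.0.0.1, while HairPin confirms the pod can reach itself back through its own Service name ("netcat" on port 8080, per the commands logged above). A manual re-run of the hairpin probe would look like the following sketch, assuming the netcat deployment and service from testdata/netcat-deployment.yaml are still present:

    kubectl --context custom-flannel-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo hairpin-ok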

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0907 01:30:34.410205  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/default-k8s-diff-port-522814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:31:06.468035  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/functional-258398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:31:15.371438  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/default-k8s-diff-port-522814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.445500567s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-690290 "pgrep -a kubelet"
I0907 01:31:34.036048  296249 config.go:182] Loaded profile config "enable-default-cni-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-690290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rz7kq" [e555366f-30d8-470a-866e-4288baeda751] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rz7kq" [e555366f-30d8-470a-866e-4288baeda751] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006766181s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-690290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (60.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0907 01:32:06.171409  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:06.813676  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:08.095551  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:10.657741  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:15.779481  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:20.330757  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/no-preload-984732/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:26.020907  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:37.293749  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/default-k8s-diff-port-522814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:46.502926  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:32:59.742944  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/addons-055380/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:00.936926  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:00.943305  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:00.954743  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:00.976132  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:01.017615  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:01.098979  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:01.260619  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:01.582273  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:02.224104  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:03.505858  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:33:06.067671  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.708025279s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zgfhn" [53f87e76-7bcc-4a29-acff-fdc774369d2c] Running
E0907 01:33:11.190026  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00448155s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
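The ControllerPod step only waits for the flannel DaemonSet pod to become Ready; it runs nothing inside the pod. The equivalent manual check, assuming the flannel-690290 profile is still up:

    kubectl --context flannel-690290 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-690290 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=60s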

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-690290 "pgrep -a kubelet"
I0907 01:33:13.049606  296249 config.go:182] Loaded profile config "flannel-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-690290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2v744" [94b96aeb-7fc1-419c-b4fd-347ba8928abc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2v744" [94b96aeb-7fc1-419c-b4fd-347ba8928abc] Running
E0907 01:33:21.432105  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003948334s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-690290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (72.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0907 01:34:07.829112  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/old-k8s-version-228090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:22.875822  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kindnet-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:41.728325  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:41.734732  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:41.746049  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:41.767503  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:41.808973  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:41.890396  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:42.051717  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:42.373334  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:43.015524  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:44.297727  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:46.859770  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:49.387585  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/auto-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:51.981421  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:34:53.433074  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/default-k8s-diff-port-522814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-690290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m12.044046636s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-690290 "pgrep -a kubelet"
I0907 01:34:57.763343  296249 config.go:182] Loaded profile config "bridge-690290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-690290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4dfr2" [25b856a5-86b3-46a7-94fa-293813f91d5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0907 01:35:02.223289  296249 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/custom-flannel-690290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4dfr2" [25b856a5-86b3-46a7-94fa-293813f91d5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005036545s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-690290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-690290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
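After a run like this it can be useful to confirm which of the *-690290 profiles are still present before re-running individual subtests. A minimal triage sketch, assuming the same out/minikube-linux-arm64 binary and that the suite's cleanup helpers have not already removed the profiles:

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 delete -p bridge-690290    # remove a leftover profile, as the cleanup helpers do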

                                                
                                    

Test skip (32/325)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.6s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-005112 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-005112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-005112
--- SKIP: TestDownloadOnlyKic (0.60s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-055380 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-344924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-344924
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-690290 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-690290" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 01:12:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-183607
contexts:
- context:
    cluster: kubernetes-upgrade-183607
    user: kubernetes-upgrade-183607
  name: kubernetes-upgrade-183607
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-183607
  user:
    client-certificate: /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kubernetes-upgrade-183607/client.crt
    client-key: /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kubernetes-upgrade-183607/client.key
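This kubeconfig accounts for the context-not-found errors throughout this debugLogs dump: the kubenet test was skipped before any cluster was created, so the only context on the host is kubernetes-upgrade-183607 and current-context is empty. A quick manual confirmation, assuming kubectl and this same kubeconfig:

    kubectl config get-contexts                    # lists only kubernetes-upgrade-183607
    kubectl --context kubenet-690290 get pods      # fails: context "kubenet-690290" does not exist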

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-690290

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-690290"

                                                
                                                
----------------------- debugLogs end: kubenet-690290 [took: 5.248221389s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-690290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-690290
--- SKIP: TestNetworkPlugins/group/kubenet (5.47s)
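
Note on the "Profile ... not found" lines above: the kubenet test is skipped before any cluster is created, so the kubenet-690290 minikube profile never exists and every host-level probe in the debug log falls back to the same hint. The same pattern repeats for cilium-690290 in the next entry. A minimal sketch of the checks the hint itself names (profile name and binary path taken from this report; not part of the test run):

out/minikube-linux-arm64 profile list              # kubenet-690290 is not listed among existing profiles
out/minikube-linux-arm64 start -p kubenet-690290   # would have to be run first for these host probes to return data
out/minikube-linux-arm64 delete -p kubenet-690290  # the cleanup step the suite runs at helpers_test.go:178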

                                                
                                    
TestNetworkPlugins/group/cilium (5.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-690290 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-690290" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21132-294391/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 01:16:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-183607
contexts:
- context:
    cluster: kubernetes-upgrade-183607
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 01:16:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-183607
  name: kubernetes-upgrade-183607
current-context: kubernetes-upgrade-183607
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-183607
  user:
    client-certificate: /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kubernetes-upgrade-183607/client.crt
    client-key: /home/jenkins/minikube-integration/21132-294391/.minikube/profiles/kubernetes-upgrade-183607/client.key
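
The dump above explains why every kubectl probe in this debug section reports "context was not found" or "does not exist": the kubeconfig captured on the host defines only the kubernetes-upgrade-183607 cluster, context, and user, while the collector pins each query to the cilium-690290 context, which was never created because the test skipped before starting a cluster. An illustrative reproduction under that assumption (not part of the test run):

kubectl config get-contexts                      # only kubernetes-upgrade-183607 is present
kubectl --context cilium-690290 get pods -A      # fails with: error: context "cilium-690290" does not exist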

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-690290

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-690290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-690290"

                                                
                                                
----------------------- debugLogs end: cilium-690290 [took: 5.590035491s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-690290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-690290
--- SKIP: TestNetworkPlugins/group/cilium (5.77s)

                                                
                                    