Test Report: Docker_Linux_crio_arm64 21631

b128d3d4cdbb5b7aeeced7d5ab95296ac270db89:2025-10-01:41714

Failed tests (6/332)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    153.01
98     TestFunctional/parallel/ServiceCmdConnect      603.8
126    TestFunctional/parallel/ServiceCmd/DeployApp   601.1
135    TestFunctional/parallel/ServiceCmd/HTTPS       0.53
136    TestFunctional/parallel/ServiceCmd/Format      0.55
137    TestFunctional/parallel/ServiceCmd/URL         0.54
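
To rerun one of these tests against the same driver/runtime combination, a sketch of the upstream minikube integration workflow (the `make integration` target and TEST_ARGS variable follow the minikube contributor docs; exact flags can vary between checkouts, so treat this as an assumption rather than the exact harness invocation behind this report):

    # from a minikube source checkout, after `make` has produced out/minikube-linux-arm64
    env TEST_ARGS="-minikube-start-args='--driver=docker --container-runtime=crio' \
      --test.run TestAddons/parallel/Ingress" make integration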
TestAddons/parallel/Ingress (153.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-157757 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-157757 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-157757 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [06830bfc-6663-43de-a719-0c64f448207e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [06830bfc-6663-43de-a719-0c64f448207e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00319432s
I1001 18:37:20.243151  290016 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-157757 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.193827189s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-157757 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
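
Before the harness's own post-mortem below, a manual triage sketch for this failure mode: curl exiting with status 28 is its operation-timeout code, so the ssh command ran but nothing answered on port 80 in time. The profile and namespace names come from the log above; `deploy/ingress-nginx-controller` is the addon's default controller name and is an assumption here.

    # is the controller pod/service actually up and addressable?
    kubectl --context addons-157757 -n ingress-nginx get pods,svc -o wide
    kubectl --context addons-157757 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
    # repeat the probe verbosely with a bounded timeout instead of hanging for 2m+
    out/minikube-linux-arm64 -p addons-157757 ssh \
      "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"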
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-157757
helpers_test.go:243: (dbg) docker inspect addons-157757:

-- stdout --
	[
	    {
	        "Id": "44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879",
	        "Created": "2025-10-01T18:32:49.878859705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-01T18:32:49.943645597Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879/hostname",
	        "HostsPath": "/var/lib/docker/containers/44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879/hosts",
	        "LogPath": "/var/lib/docker/containers/44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879/44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879-json.log",
	        "Name": "/addons-157757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-157757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-157757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879",
	                "LowerDir": "/var/lib/docker/overlay2/98fcc0d2f9f252ead9ab915834b64119d418fbe64b8689606f3e6a82b0028972-init/diff:/var/lib/docker/overlay2/346fb2e4be8ca49e66f0777a766be9ef323e3747b8e386ae9882fb8153286814/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98fcc0d2f9f252ead9ab915834b64119d418fbe64b8689606f3e6a82b0028972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98fcc0d2f9f252ead9ab915834b64119d418fbe64b8689606f3e6a82b0028972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98fcc0d2f9f252ead9ab915834b64119d418fbe64b8689606f3e6a82b0028972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-157757",
	                "Source": "/var/lib/docker/volumes/addons-157757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-157757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-157757",
	                "name.minikube.sigs.k8s.io": "addons-157757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24f9f463a9fff012c1cfb961381bce8ec9a48745b966f422b3efe884cbc944c7",
	            "SandboxKey": "/var/run/docker/netns/24f9f463a9ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-157757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:12:62:8e:d8:ef",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92834a68276c6c5342b2f9df3efa36d132e762c73aac99bc6267a5b97bc203b4",
	                    "EndpointID": "7ea5c5c63e3c87e871cd536eb1e04e711f8cf768e86fe307a42abdcf0fa7ad62",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-157757",
	                        "44a9a4951ad6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
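
The inspect output above is what the harness parses for its endpoints: HostConfig.PortBindings requests ephemeral 127.0.0.1 ports (HostPort left empty), and NetworkSettings.Ports shows what Docker actually assigned (22/tcp → 127.0.0.1:33141, the SSH port used below). A sketch of the equivalent manual lookup, assuming jq is available (any JSON tool works):

    docker inspect addons-157757 | jq -r \
      '.[0].NetworkSettings.Ports | to_entries[]
       | "\(.key) -> \(.value[0].HostIp):\(.value[0].HostPort)"'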
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-157757 -n addons-157757
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 logs -n 25: (1.649230716s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME
	delete │ -p download-docker-539864 │ download-docker-539864 │ jenkins │ v1.37.0 │ 01 Oct 25 18:32 UTC │ 01 Oct 25 18:32 UTC
	start │ --download-only -p binary-mirror-755747 --alsologtostderr --binary-mirror http://127.0.0.1:44469 --driver=docker  --container-runtime=crio │ binary-mirror-755747 │ jenkins │ v1.37.0 │ 01 Oct 25 18:32 UTC │
	delete │ -p binary-mirror-755747 │ binary-mirror-755747 │ jenkins │ v1.37.0 │ 01 Oct 25 18:32 UTC │ 01 Oct 25 18:32 UTC
	addons │ disable dashboard -p addons-157757 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:32 UTC │
	addons │ enable dashboard -p addons-157757 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:32 UTC │
	start │ -p addons-157757 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:32 UTC │ 01 Oct 25 18:35 UTC
	addons │ addons-157757 addons disable volcano --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:35 UTC │ 01 Oct 25 18:35 UTC
	addons │ addons-157757 addons disable gcp-auth --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:35 UTC │ 01 Oct 25 18:35 UTC
	addons │ addons-157757 addons disable yakd --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:35 UTC │ 01 Oct 25 18:36 UTC
	ip │ addons-157757 ip │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ addons-157757 addons disable registry --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ addons-157757 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	ssh │ addons-157757 ssh cat /opt/local-path-provisioner/pvc-e668be2f-67e8-4fa9-99d1-db2c2da3b417_default_test-pvc/file1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ addons-157757 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:37 UTC
	addons │ addons-157757 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ addons-157757 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ addons-157757 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ enable headlamp -p addons-157757 --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:36 UTC │ 01 Oct 25 18:36 UTC
	addons │ addons-157757 addons disable headlamp --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │ 01 Oct 25 18:37 UTC
	addons │ addons-157757 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │ 01 Oct 25 18:37 UTC
	addons │ addons-157757 addons disable metrics-server --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │ 01 Oct 25 18:37 UTC
	addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-157757 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │ 01 Oct 25 18:37 UTC
	addons │ addons-157757 addons disable registry-creds --alsologtostderr -v=1 │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │ 01 Oct 25 18:37 UTC
	ssh │ addons-157757 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:37 UTC │
	ip │ addons-157757 ip │ addons-157757 │ jenkins │ v1.37.0 │ 01 Oct 25 18:39 UTC │ 01 Oct 25 18:39 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:32:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:32:24.044502  290782 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:32:24.044709  290782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:32:24.044737  290782 out.go:374] Setting ErrFile to fd 2...
	I1001 18:32:24.044756  290782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:32:24.045071  290782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 18:32:24.045603  290782 out.go:368] Setting JSON to false
	I1001 18:32:24.046483  290782 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4496,"bootTime":1759339048,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 18:32:24.046585  290782 start.go:140] virtualization:  
	I1001 18:32:24.049818  290782 out.go:179] * [addons-157757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1001 18:32:24.053670  290782 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:32:24.053734  290782 notify.go:220] Checking for updates...
	I1001 18:32:24.059577  290782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:32:24.062573  290782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:32:24.065476  290782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 18:32:24.068509  290782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 18:32:24.071431  290782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:32:24.074400  290782 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:32:24.099381  290782 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 18:32:24.099543  290782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:32:24.164926  290782 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-01 18:32:24.154010086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:32:24.165046  290782 docker.go:318] overlay module found
	I1001 18:32:24.170044  290782 out.go:179] * Using the docker driver based on user configuration
	I1001 18:32:24.172823  290782 start.go:304] selected driver: docker
	I1001 18:32:24.172846  290782 start.go:921] validating driver "docker" against <nil>
	I1001 18:32:24.172860  290782 start.go:932] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:32:24.173629  290782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:32:24.227771  290782 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-01 18:32:24.219164423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:32:24.227955  290782 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 18:32:24.228205  290782 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:32:24.231141  290782 out.go:179] * Using Docker driver with root privileges
	I1001 18:32:24.233907  290782 cni.go:84] Creating CNI manager for ""
	I1001 18:32:24.233976  290782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:32:24.233994  290782 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 18:32:24.234075  290782 start.go:348] cluster config:
	{Name:addons-157757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-157757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1001 18:32:24.237123  290782 out.go:179] * Starting "addons-157757" primary control-plane node in "addons-157757" cluster
	I1001 18:32:24.240000  290782 cache.go:123] Beginning downloading kic base image for docker with crio
	I1001 18:32:24.242850  290782 out.go:179] * Pulling base image v0.0.48 ...
	I1001 18:32:24.245609  290782 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:32:24.245637  290782 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I1001 18:32:24.245667  290782 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1001 18:32:24.245676  290782 cache.go:58] Caching tarball of preloaded images
	I1001 18:32:24.245762  290782 preload.go:233] Found /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1001 18:32:24.245771  290782 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1001 18:32:24.246127  290782 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/config.json ...
	I1001 18:32:24.246148  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/config.json: {Name:mkcdae5cb922dd3f9cd439b6a5a08f52566eec47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:24.260945  290782 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I1001 18:32:24.261071  290782 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I1001 18:32:24.261098  290782 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I1001 18:32:24.261103  290782 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I1001 18:32:24.261111  290782 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I1001 18:32:24.261122  290782 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I1001 18:32:41.913594  290782 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I1001 18:32:41.913635  290782 cache.go:232] Successfully downloaded all kic artifacts
	I1001 18:32:41.913675  290782 start.go:360] acquireMachinesLock for addons-157757: {Name:mkf77f75b25204d0919f6eb9fdfa3f7a6e2f5513 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:32:41.913805  290782 start.go:364] duration metric: took 105.608µs to acquireMachinesLock for "addons-157757"
	I1001 18:32:41.913838  290782 start.go:93] Provisioning new machine with config: &{Name:addons-157757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-157757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:32:41.913917  290782 start.go:125] createHost starting for "" (driver="docker")
	I1001 18:32:41.917307  290782 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1001 18:32:41.917548  290782 start.go:159] libmachine.API.Create for "addons-157757" (driver="docker")
	I1001 18:32:41.917586  290782 client.go:168] LocalClient.Create starting
	I1001 18:32:41.917710  290782 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem
	I1001 18:32:42.600338  290782 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem
	I1001 18:32:43.290520  290782 cli_runner.go:164] Run: docker network inspect addons-157757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 18:32:43.305072  290782 cli_runner.go:211] docker network inspect addons-157757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 18:32:43.305173  290782 network_create.go:284] running [docker network inspect addons-157757] to gather additional debugging logs...
	I1001 18:32:43.305195  290782 cli_runner.go:164] Run: docker network inspect addons-157757
	W1001 18:32:43.321651  290782 cli_runner.go:211] docker network inspect addons-157757 returned with exit code 1
	I1001 18:32:43.321682  290782 network_create.go:287] error running [docker network inspect addons-157757]: docker network inspect addons-157757: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-157757 not found
	I1001 18:32:43.321696  290782 network_create.go:289] output of [docker network inspect addons-157757]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-157757 not found
	
	** /stderr **
	I1001 18:32:43.321790  290782 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 18:32:43.337691  290782 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cbb70}
	I1001 18:32:43.337734  290782 network_create.go:124] attempt to create docker network addons-157757 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 18:32:43.337794  290782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-157757 addons-157757
	I1001 18:32:43.393894  290782 network_create.go:108] docker network addons-157757 192.168.49.0/24 created
	I1001 18:32:43.393942  290782 kic.go:121] calculated static IP "192.168.49.2" for the "addons-157757" container
	I1001 18:32:43.394024  290782 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 18:32:43.407609  290782 cli_runner.go:164] Run: docker volume create addons-157757 --label name.minikube.sigs.k8s.io=addons-157757 --label created_by.minikube.sigs.k8s.io=true
	I1001 18:32:43.424747  290782 oci.go:103] Successfully created a docker volume addons-157757
	I1001 18:32:43.424838  290782 cli_runner.go:164] Run: docker run --rm --name addons-157757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-157757 --entrypoint /usr/bin/test -v addons-157757:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I1001 18:32:45.521375  290782 cli_runner.go:217] Completed: docker run --rm --name addons-157757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-157757 --entrypoint /usr/bin/test -v addons-157757:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.096496988s)
	I1001 18:32:45.521410  290782 oci.go:107] Successfully prepared a docker volume addons-157757
	I1001 18:32:45.521452  290782 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:32:45.521474  290782 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 18:32:45.521550  290782 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-157757:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 18:32:49.811918  290782 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-157757:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.290324725s)
	I1001 18:32:49.811956  290782 kic.go:203] duration metric: took 4.290478668s to extract preloaded images to volume ...
	W1001 18:32:49.812101  290782 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 18:32:49.812214  290782 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 18:32:49.864464  290782 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-157757 --name addons-157757 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-157757 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-157757 --network addons-157757 --ip 192.168.49.2 --volume addons-157757:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I1001 18:32:50.168606  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Running}}
	I1001 18:32:50.193533  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:32:50.220887  290782 cli_runner.go:164] Run: docker exec addons-157757 stat /var/lib/dpkg/alternatives/iptables
	I1001 18:32:50.273854  290782 oci.go:144] the created container "addons-157757" has a running status.
	I1001 18:32:50.273883  290782 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa...
	I1001 18:32:50.834401  290782 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 18:32:50.858570  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:32:50.884194  290782 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 18:32:50.884213  290782 kic_runner.go:114] Args: [docker exec --privileged addons-157757 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 18:32:50.939682  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:32:50.959951  290782 machine.go:93] provisionDockerMachine start ...
	I1001 18:32:50.960041  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:50.983815  290782 main.go:141] libmachine: Using SSH client type: native
	I1001 18:32:50.984144  290782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1001 18:32:50.984154  290782 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 18:32:51.138904  290782 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-157757
	
	I1001 18:32:51.138987  290782 ubuntu.go:182] provisioning hostname "addons-157757"
	I1001 18:32:51.139103  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:51.160625  290782 main.go:141] libmachine: Using SSH client type: native
	I1001 18:32:51.160952  290782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1001 18:32:51.160965  290782 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-157757 && echo "addons-157757" | sudo tee /etc/hostname
	I1001 18:32:51.316614  290782 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-157757
	
	I1001 18:32:51.316696  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:51.335110  290782 main.go:141] libmachine: Using SSH client type: native
	I1001 18:32:51.335410  290782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1001 18:32:51.335427  290782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-157757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-157757/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-157757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:32:51.478962  290782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 18:32:51.479029  290782 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21631-288146/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-288146/.minikube}
	I1001 18:32:51.479067  290782 ubuntu.go:190] setting up certificates
	I1001 18:32:51.479078  290782 provision.go:84] configureAuth start
	I1001 18:32:51.479138  290782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-157757
	I1001 18:32:51.496567  290782 provision.go:143] copyHostCerts
	I1001 18:32:51.496652  290782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-288146/.minikube/ca.pem (1082 bytes)
	I1001 18:32:51.496799  290782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-288146/.minikube/cert.pem (1123 bytes)
	I1001 18:32:51.496877  290782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-288146/.minikube/key.pem (1675 bytes)
	I1001 18:32:51.496930  290782 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca-key.pem org=jenkins.addons-157757 san=[127.0.0.1 192.168.49.2 addons-157757 localhost minikube]
	I1001 18:32:52.318078  290782 provision.go:177] copyRemoteCerts
	I1001 18:32:52.318147  290782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:32:52.318187  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:52.334568  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:32:52.432934  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:32:52.457040  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 18:32:52.480550  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 18:32:52.504556  290782 provision.go:87] duration metric: took 1.025465038s to configureAuth
	I1001 18:32:52.504583  290782 ubuntu.go:206] setting minikube options for container-runtime
	I1001 18:32:52.504786  290782 config.go:182] Loaded profile config "addons-157757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:32:52.504900  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:52.521399  290782 main.go:141] libmachine: Using SSH client type: native
	I1001 18:32:52.521709  290782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I1001 18:32:52.521726  290782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:32:52.760400  290782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:32:52.760471  290782 machine.go:96] duration metric: took 1.800498886s to provisionDockerMachine
	I1001 18:32:52.760497  290782 client.go:171] duration metric: took 10.842899695s to LocalClient.Create
	I1001 18:32:52.760543  290782 start.go:167] duration metric: took 10.842981737s to libmachine.API.Create "addons-157757"
	I1001 18:32:52.760570  290782 start.go:293] postStartSetup for "addons-157757" (driver="docker")
	I1001 18:32:52.760593  290782 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:32:52.760705  290782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:32:52.760778  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:52.781834  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:32:52.879915  290782 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:32:52.883002  290782 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 18:32:52.883036  290782 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 18:32:52.883048  290782 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 18:32:52.883056  290782 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 18:32:52.883066  290782 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-288146/.minikube/addons for local assets ...
	I1001 18:32:52.883131  290782 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-288146/.minikube/files for local assets ...
	I1001 18:32:52.883158  290782 start.go:296] duration metric: took 122.570069ms for postStartSetup
	I1001 18:32:52.883460  290782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-157757
	I1001 18:32:52.900299  290782 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/config.json ...
	I1001 18:32:52.900595  290782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:32:52.900648  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:52.916819  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:32:53.011705  290782 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 18:32:53.018265  290782 start.go:128] duration metric: took 11.104330405s to createHost
	I1001 18:32:53.018300  290782 start.go:83] releasing machines lock for "addons-157757", held for 11.104479747s
	I1001 18:32:53.018398  290782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-157757
	I1001 18:32:53.036789  290782 ssh_runner.go:195] Run: cat /version.json
	I1001 18:32:53.036856  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:53.037117  290782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:32:53.037194  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:32:53.055938  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:32:53.069745  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:32:53.154330  290782 ssh_runner.go:195] Run: systemctl --version
	I1001 18:32:53.282625  290782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:32:53.420829  290782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 18:32:53.425253  290782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:32:53.449419  290782 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1001 18:32:53.449504  290782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:32:53.488842  290782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1001 18:32:53.488870  290782 start.go:495] detecting cgroup driver to use...
	I1001 18:32:53.488905  290782 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 18:32:53.488961  290782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:32:53.505630  290782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:32:53.517918  290782 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:32:53.517997  290782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:32:53.532310  290782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:32:53.547659  290782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:32:53.640781  290782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:32:53.728557  290782 docker.go:234] disabling docker service ...
	I1001 18:32:53.728631  290782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:32:53.749921  290782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:32:53.761953  290782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:32:53.854759  290782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:32:53.952090  290782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:32:53.964425  290782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:32:53.981414  290782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1001 18:32:53.981493  290782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:53.991755  290782 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:32:53.991861  290782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:54.002251  290782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:54.012026  290782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:54.022995  290782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:32:54.033659  290782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:54.044683  290782 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:54.061626  290782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:32:54.072113  290782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:32:54.080914  290782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:32:54.089712  290782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:32:54.175720  290782 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 18:32:54.290736  290782 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:32:54.290895  290782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:32:54.295131  290782 start.go:563] Will wait 60s for crictl version
	I1001 18:32:54.295200  290782 ssh_runner.go:195] Run: which crictl
	I1001 18:32:54.298684  290782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:32:54.340331  290782 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1001 18:32:54.340481  290782 ssh_runner.go:195] Run: crio --version
	I1001 18:32:54.379616  290782 ssh_runner.go:195] Run: crio --version
	I1001 18:32:54.421166  290782 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I1001 18:32:54.423918  290782 cli_runner.go:164] Run: docker network inspect addons-157757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 18:32:54.440073  290782 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 18:32:54.443706  290782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:32:54.454617  290782 kubeadm.go:875] updating cluster {Name:addons-157757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-157757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:32:54.454725  290782 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:32:54.454843  290782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:32:54.532929  290782 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:32:54.532952  290782 crio.go:433] Images already preloaded, skipping extraction
	I1001 18:32:54.533009  290782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:32:54.569121  290782 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:32:54.569147  290782 cache_images.go:85] Images are preloaded, skipping loading
	I1001 18:32:54.569157  290782 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1001 18:32:54.569245  290782 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-157757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-157757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 18:32:54.569333  290782 ssh_runner.go:195] Run: crio config
	I1001 18:32:54.624710  290782 cni.go:84] Creating CNI manager for ""
	I1001 18:32:54.624731  290782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:32:54.624741  290782 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:32:54.624765  290782 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-157757 NodeName:addons-157757 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:32:54.624902  290782 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-157757"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 18:32:54.624977  290782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1001 18:32:54.634049  290782 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:32:54.634128  290782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:32:54.642814  290782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1001 18:32:54.661077  290782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:32:54.679946  290782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1001 18:32:54.698327  290782 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 18:32:54.701880  290782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 18:32:54.712901  290782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:32:54.810579  290782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:32:54.825302  290782 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757 for IP: 192.168.49.2
	I1001 18:32:54.825383  290782 certs.go:194] generating shared ca certs ...
	I1001 18:32:54.825417  290782 certs.go:226] acquiring lock for ca certs: {Name:mke2b4e9b838c885b8b094f221acc5151872bc25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:54.825626  290782 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-288146/.minikube/ca.key
	I1001 18:32:56.174807  290782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt ...
	I1001 18:32:56.174842  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt: {Name:mk83901f2e4ceb9b96728c3b9c2fc78455f1df28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:56.175100  290782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-288146/.minikube/ca.key ...
	I1001 18:32:56.175126  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/ca.key: {Name:mk2b6f06b2f68d6948500f9fe19b2ae4a988bf5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:56.175231  290782 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.key
	I1001 18:32:57.206213  290782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.crt ...
	I1001 18:32:57.206246  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.crt: {Name:mke3c8b99213d449907bd145709389d4232237c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:57.206447  290782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.key ...
	I1001 18:32:57.206461  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.key: {Name:mk0d3608337243e272284aa212d1290dddf1b507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:57.206551  290782 certs.go:256] generating profile certs ...
	I1001 18:32:57.206614  290782 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.key
	I1001 18:32:57.206632  290782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt with IP's: []
	I1001 18:32:58.376184  290782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt ...
	I1001 18:32:58.376216  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: {Name:mk732e71a13971bc916b46d98fdb2f384b1c831b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:58.376403  290782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.key ...
	I1001 18:32:58.376417  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.key: {Name:mk31dd0610800980addaed81f3fae0bf6f571112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:58.376504  290782 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.key.133da081
	I1001 18:32:58.376523  290782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.crt.133da081 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 18:32:59.072126  290782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.crt.133da081 ...
	I1001 18:32:59.072164  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.crt.133da081: {Name:mkbf6661af8612da0a24ab9363ca2976ade6ed55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:59.072353  290782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.key.133da081 ...
	I1001 18:32:59.072370  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.key.133da081: {Name:mk5ef1af8a29ba76bb40c9a237568dc59a7e89ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:59.072463  290782 certs.go:381] copying /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.crt.133da081 -> /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.crt
	I1001 18:32:59.072542  290782 certs.go:385] copying /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.key.133da081 -> /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.key
	I1001 18:32:59.072602  290782 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.key
	I1001 18:32:59.072632  290782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.crt with IP's: []
	I1001 18:32:59.319364  290782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.crt ...
	I1001 18:32:59.319395  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.crt: {Name:mkf0be95572f07f66dc676a80e87e057080519f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:59.319573  290782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.key ...
	I1001 18:32:59.319587  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.key: {Name:mk0631af29d0c7b3487b4003843e81d4d171b73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:32:59.319781  290782 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:32:59.319822  290782 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:32:59.319851  290782 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:32:59.319879  290782 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/key.pem (1675 bytes)
	I1001 18:32:59.320427  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:32:59.345944  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 18:32:59.372408  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:32:59.406907  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 18:32:59.432395  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 18:32:59.456941  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 18:32:59.481962  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:32:59.506824  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:32:59.531983  290782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:32:59.556374  290782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:32:59.574747  290782 ssh_runner.go:195] Run: openssl version
	I1001 18:32:59.580319  290782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:32:59.589835  290782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:32:59.593465  290782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:32:59.593532  290782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:32:59.600819  290782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 18:32:59.610699  290782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:32:59.613978  290782 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 18:32:59.614058  290782 kubeadm.go:392] StartCluster: {Name:addons-157757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-157757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:32:59.614144  290782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:32:59.614205  290782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:32:59.651968  290782 cri.go:89] found id: ""
	I1001 18:32:59.652054  290782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 18:32:59.661362  290782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:32:59.670417  290782 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 18:32:59.670518  290782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:32:59.679538  290782 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 18:32:59.679560  290782 kubeadm.go:157] found existing configuration files:
	
	I1001 18:32:59.679635  290782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 18:32:59.688741  290782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 18:32:59.688814  290782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 18:32:59.697641  290782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 18:32:59.706467  290782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 18:32:59.706555  290782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:32:59.715434  290782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 18:32:59.725030  290782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 18:32:59.725101  290782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:32:59.734371  290782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 18:32:59.743667  290782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 18:32:59.743760  290782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 18:32:59.753011  290782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 18:32:59.795881  290782 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I1001 18:32:59.796237  290782 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 18:32:59.812525  290782 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 18:32:59.812612  290782 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I1001 18:32:59.812656  290782 kubeadm.go:310] OS: Linux
	I1001 18:32:59.812714  290782 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 18:32:59.812767  290782 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 18:32:59.812818  290782 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 18:32:59.812870  290782 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 18:32:59.812922  290782 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 18:32:59.812974  290782 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 18:32:59.813023  290782 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 18:32:59.813077  290782 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 18:32:59.813127  290782 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 18:32:59.869922  290782 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 18:32:59.870036  290782 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 18:32:59.870131  290782 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 18:32:59.877014  290782 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 18:32:59.881125  290782 out.go:252]   - Generating certificates and keys ...
	I1001 18:32:59.881243  290782 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 18:32:59.881327  290782 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 18:33:00.116821  290782 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 18:33:00.830255  290782 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 18:33:01.179259  290782 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 18:33:02.318655  290782 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 18:33:02.779980  290782 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 18:33:02.780344  290782 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-157757 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 18:33:04.004474  290782 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 18:33:04.004909  290782 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-157757 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 18:33:04.549048  290782 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 18:33:05.040367  290782 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 18:33:05.963356  290782 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 18:33:05.963615  290782 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 18:33:06.703948  290782 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 18:33:07.324392  290782 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 18:33:07.565954  290782 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 18:33:07.662419  290782 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 18:33:07.874171  290782 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 18:33:07.875208  290782 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 18:33:07.880368  290782 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 18:33:07.883731  290782 out.go:252]   - Booting up control plane ...
	I1001 18:33:07.883844  290782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 18:33:07.883927  290782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 18:33:07.884772  290782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 18:33:07.895466  290782 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 18:33:07.895799  290782 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1001 18:33:07.902543  290782 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1001 18:33:07.902962  290782 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 18:33:07.903354  290782 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 18:33:08.004001  290782 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 18:33:08.004121  290782 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 18:33:08.505793  290782 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.876429ms
	I1001 18:33:08.509113  290782 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1001 18:33:08.509212  290782 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1001 18:33:08.509542  290782 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1001 18:33:08.509642  290782 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1001 18:33:11.552125  290782 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.042640252s
	I1001 18:33:14.030915  290782 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.521752596s
	I1001 18:33:15.014316  290782 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.503060949s
	I1001 18:33:15.060006  290782 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 18:33:15.082610  290782 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 18:33:15.101454  290782 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 18:33:15.101665  290782 kubeadm.go:310] [mark-control-plane] Marking the node addons-157757 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 18:33:15.116655  290782 kubeadm.go:310] [bootstrap-token] Using token: zzbv66.kqwy21rag00wa2ya
	I1001 18:33:15.119713  290782 out.go:252]   - Configuring RBAC rules ...
	I1001 18:33:15.119869  290782 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 18:33:15.126533  290782 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 18:33:15.137799  290782 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 18:33:15.150842  290782 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 18:33:15.160572  290782 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 18:33:15.165714  290782 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 18:33:15.420490  290782 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 18:33:15.851232  290782 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 18:33:16.420222  290782 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 18:33:16.420244  290782 kubeadm.go:310] 
	I1001 18:33:16.420304  290782 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 18:33:16.420314  290782 kubeadm.go:310] 
	I1001 18:33:16.420399  290782 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 18:33:16.420405  290782 kubeadm.go:310] 
	I1001 18:33:16.420430  290782 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 18:33:16.420495  290782 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 18:33:16.420552  290782 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 18:33:16.420557  290782 kubeadm.go:310] 
	I1001 18:33:16.420611  290782 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 18:33:16.420626  290782 kubeadm.go:310] 
	I1001 18:33:16.420674  290782 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 18:33:16.420679  290782 kubeadm.go:310] 
	I1001 18:33:16.420735  290782 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 18:33:16.420810  290782 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 18:33:16.420878  290782 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 18:33:16.420886  290782 kubeadm.go:310] 
	I1001 18:33:16.420974  290782 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 18:33:16.421052  290782 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 18:33:16.421056  290782 kubeadm.go:310] 
	I1001 18:33:16.421140  290782 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zzbv66.kqwy21rag00wa2ya \
	I1001 18:33:16.421242  290782 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a44e93f0a40bfc045b30d69bbccb190a595a084f42f229c171fa901161d14d2d \
	I1001 18:33:16.421262  290782 kubeadm.go:310] 	--control-plane 
	I1001 18:33:16.421267  290782 kubeadm.go:310] 
	I1001 18:33:16.421351  290782 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 18:33:16.421356  290782 kubeadm.go:310] 
	I1001 18:33:16.421437  290782 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zzbv66.kqwy21rag00wa2ya \
	I1001 18:33:16.421538  290782 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a44e93f0a40bfc045b30d69bbccb190a595a084f42f229c171fa901161d14d2d 
	I1001 18:33:16.425599  290782 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1001 18:33:16.425840  290782 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1001 18:33:16.425949  290782 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 18:33:16.425965  290782 cni.go:84] Creating CNI manager for ""
	I1001 18:33:16.425972  290782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:33:16.431063  290782 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1001 18:33:16.434053  290782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 18:33:16.437928  290782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1001 18:33:16.437948  290782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 18:33:16.456994  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 18:33:16.755261  290782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:33:16.755406  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:16.755406  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-157757 minikube.k8s.io/updated_at=2025_10_01T18_33_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492 minikube.k8s.io/name=addons-157757 minikube.k8s.io/primary=true
	I1001 18:33:16.943422  290782 ops.go:34] apiserver oom_adj: -16
	I1001 18:33:16.943535  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:17.443970  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:17.943668  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:18.444491  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:18.943658  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:19.443659  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:19.943788  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:20.444365  290782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 18:33:20.532486  290782 kubeadm.go:1105] duration metric: took 3.777151329s to wait for elevateKubeSystemPrivileges
	I1001 18:33:20.532521  290782 kubeadm.go:394] duration metric: took 20.918494125s to StartCluster
	I1001 18:33:20.532538  290782 settings.go:142] acquiring lock: {Name:mkd3d3b21fb3f2e0bfee200edb8bfa6f57a6455f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:33:20.532655  290782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:33:20.533104  290782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/kubeconfig: {Name:mkf64803b00ff38d43d452cf5741b7023d24d24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:33:20.533303  290782 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:33:20.533440  290782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 18:33:20.533685  290782 config.go:182] Loaded profile config "addons-157757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:33:20.533718  290782 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
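Everything that follows fans out from this toEnable map: each addon whose flag is true gets its own "Setting addon X=true" pass and enablement path. Schematically, with the map trimmed to a few of the entries above:

package main

import "fmt"

func main() {
	// Trimmed from the toEnable map in the log; names mirror it.
	toEnable := map[string]bool{
		"ingress": true, "registry": true, "metrics-server": true,
		"volcano": true, "dashboard": false,
	}
	for name, enabled := range toEnable {
		if !enabled {
			continue // disabled addons are skipped entirely
		}
		fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-157757")
	}
}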
	I1001 18:33:20.533797  290782 addons.go:69] Setting yakd=true in profile "addons-157757"
	I1001 18:33:20.533816  290782 addons.go:238] Setting addon yakd=true in "addons-157757"
	I1001 18:33:20.533836  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.534312  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.534912  290782 addons.go:69] Setting inspektor-gadget=true in profile "addons-157757"
	I1001 18:33:20.534932  290782 addons.go:238] Setting addon inspektor-gadget=true in "addons-157757"
	I1001 18:33:20.534976  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.535408  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.535695  290782 addons.go:69] Setting metrics-server=true in profile "addons-157757"
	I1001 18:33:20.535720  290782 addons.go:238] Setting addon metrics-server=true in "addons-157757"
	I1001 18:33:20.535760  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.536170  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.536433  290782 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-157757"
	I1001 18:33:20.536457  290782 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-157757"
	I1001 18:33:20.536480  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.538552  290782 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-157757"
	I1001 18:33:20.538581  290782 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-157757"
	I1001 18:33:20.538613  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.539158  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.547088  290782 addons.go:69] Setting registry=true in profile "addons-157757"
	I1001 18:33:20.547124  290782 addons.go:238] Setting addon registry=true in "addons-157757"
	I1001 18:33:20.547162  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.547619  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.547808  290782 addons.go:69] Setting cloud-spanner=true in profile "addons-157757"
	I1001 18:33:20.547836  290782 addons.go:238] Setting addon cloud-spanner=true in "addons-157757"
	I1001 18:33:20.547872  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.548287  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.557808  290782 addons.go:69] Setting registry-creds=true in profile "addons-157757"
	I1001 18:33:20.557849  290782 addons.go:238] Setting addon registry-creds=true in "addons-157757"
	I1001 18:33:20.557895  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.558361  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.573207  290782 addons.go:69] Setting storage-provisioner=true in profile "addons-157757"
	I1001 18:33:20.573245  290782 addons.go:238] Setting addon storage-provisioner=true in "addons-157757"
	I1001 18:33:20.573291  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.573532  290782 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-157757"
	I1001 18:33:20.573600  290782 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-157757"
	I1001 18:33:20.573641  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.573747  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.574095  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.594843  290782 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-157757"
	I1001 18:33:20.594885  290782 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-157757"
	I1001 18:33:20.595221  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.595382  290782 addons.go:69] Setting default-storageclass=true in profile "addons-157757"
	I1001 18:33:20.595398  290782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-157757"
	I1001 18:33:20.595635  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.623507  290782 addons.go:69] Setting volcano=true in profile "addons-157757"
	I1001 18:33:20.623552  290782 addons.go:238] Setting addon volcano=true in "addons-157757"
	I1001 18:33:20.623588  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.624051  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.629899  290782 addons.go:69] Setting gcp-auth=true in profile "addons-157757"
	I1001 18:33:20.629999  290782 mustload.go:65] Loading cluster: addons-157757
	I1001 18:33:20.630246  290782 config.go:182] Loaded profile config "addons-157757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:33:20.630602  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.648447  290782 addons.go:69] Setting volumesnapshots=true in profile "addons-157757"
	I1001 18:33:20.648486  290782 addons.go:238] Setting addon volumesnapshots=true in "addons-157757"
	I1001 18:33:20.648520  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.648988  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.649672  290782 addons.go:69] Setting ingress=true in profile "addons-157757"
	I1001 18:33:20.649694  290782 addons.go:238] Setting addon ingress=true in "addons-157757"
	I1001 18:33:20.649735  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.650146  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.672141  290782 addons.go:69] Setting ingress-dns=true in profile "addons-157757"
	I1001 18:33:20.672176  290782 addons.go:238] Setting addon ingress-dns=true in "addons-157757"
	I1001 18:33:20.672223  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.672709  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.674946  290782 out.go:179] * Verifying Kubernetes components...
	I1001 18:33:20.677957  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.715805  290782 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1001 18:33:20.719463  290782 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 18:33:20.719534  290782 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1001 18:33:20.719649  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
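Each of these inspect calls resolves which host port Docker mapped to the container's SSH port 22/tcp; the sshutil lines later dial 127.0.0.1 at that port (33141 here). A small sketch of the lookup, using the exact --format template from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the host port bound to the container's 22/tcp.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-157757").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port) // e.g. 127.0.0.1:33141
}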
	I1001 18:33:20.730314  290782 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1001 18:33:20.733240  290782 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 18:33:20.733318  290782 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 18:33:20.733427  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.750018  290782 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-157757"
	I1001 18:33:20.750072  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.750491  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.751662  290782 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1001 18:33:20.754582  290782 out.go:179]   - Using image docker.io/registry:3.0.0
	I1001 18:33:20.757635  290782 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 18:33:20.757712  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 18:33:20.757824  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.763563  290782 addons.go:238] Setting addon default-storageclass=true in "addons-157757"
	I1001 18:33:20.763604  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.764001  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:20.769278  290782 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1001 18:33:20.772087  290782 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1001 18:33:20.772124  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 18:33:20.772206  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.787846  290782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:33:20.790899  290782 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1001 18:33:20.793820  290782 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 18:33:20.793840  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 18:33:20.793914  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.808156  290782 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 18:33:20.818163  290782 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1001 18:33:20.820493  290782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
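The long pipeline above pulls the coredns ConfigMap, uses sed to splice a hosts{} block (mapping host.minikube.internal to the gateway 192.168.49.1) in front of the forward plugin plus a log directive before errors, then replaces the ConfigMap. A sketch of just the hosts insertion as a pure string transform; the Corefile excerpt is illustrative:

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
    }`
	hosts := `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
`
	// Insert the hosts block immediately before the forward plugin,
	// mirroring the sed `/^        forward .../i` in the log.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	fmt.Println(patched)
}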
	W1001 18:33:20.820575  290782 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 18:33:20.859632  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:20.865786  290782 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1001 18:33:20.867478  290782 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 18:33:20.867501  290782 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 18:33:20.867551  290782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:33:20.867567  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.867574  290782 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1001 18:33:20.867635  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:20.888911  290782 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:33:20.888932  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:33:20.888996  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.868288  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 18:33:20.868496  290782 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 18:33:20.910282  290782 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1001 18:33:20.913249  290782 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1001 18:33:20.913274  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1001 18:33:20.913347  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.895494  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1001 18:33:20.913716  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.923417  290782 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1001 18:33:20.926339  290782 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1001 18:33:20.895629  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:20.895682  290782 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1001 18:33:20.929126  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1001 18:33:20.929325  290782 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 18:33:20.929337  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 18:33:20.929399  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.929591  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.941994  290782 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 18:33:20.942017  290782 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 18:33:20.942089  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.945723  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:20.946515  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 18:33:20.958967  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 18:33:20.962014  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 18:33:20.962289  290782 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:33:20.962307  290782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:33:20.962364  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:20.988683  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 18:33:20.994544  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 18:33:20.997513  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 18:33:20.998839  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:20.999744  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.003438  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 18:33:21.006274  290782 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 18:33:21.009304  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 18:33:21.009327  290782 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 18:33:21.009406  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:21.044034  290782 out.go:179]   - Using image docker.io/busybox:stable
	I1001 18:33:21.051231  290782 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 18:33:21.056960  290782 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 18:33:21.057002  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 18:33:21.057079  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:21.083779  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.093224  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.104464  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.124118  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.151224  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.152151  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.156422  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.174905  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.188437  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:21.190916  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	W1001 18:33:21.192428  290782 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1001 18:33:21.192465  290782 retry.go:31] will retry after 201.58102ms: ssh: handshake failed: EOF
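The handshake EOF is treated as transient: the runner logs the failure, sleeps a short randomized delay, and dials again. A minimal sketch of that retry shape (the attempt cap and the jitter formula are assumptions, not retry.go's exact backoff):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Randomized delay so concurrent retries don't stampede.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return errors.New("ssh: handshake failed: EOF") // transient, as above
		}
		return nil
	})
}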
	I1001 18:33:21.386522  290782 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:21.386588  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1001 18:33:21.486828  290782 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 18:33:21.486850  290782 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 18:33:21.490559  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 18:33:21.514506  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 18:33:21.550548  290782 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 18:33:21.550620  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 18:33:21.566118  290782 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 18:33:21.566193  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 18:33:21.570590  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 18:33:21.618357  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 18:33:21.618431  290782 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 18:33:21.622608  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:21.647905  290782 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 18:33:21.647977  290782 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 18:33:21.673490  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 18:33:21.677741  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1001 18:33:21.681278  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:33:21.705327  290782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:33:21.714208  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 18:33:21.731087  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:33:21.747096  290782 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 18:33:21.747177  290782 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 18:33:21.751058  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1001 18:33:21.756769  290782 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 18:33:21.756849  290782 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 18:33:21.761541  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 18:33:21.761620  290782 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 18:33:21.771996  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 18:33:21.832241  290782 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 18:33:21.832322  290782 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 18:33:21.912075  290782 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 18:33:21.912155  290782 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 18:33:21.943746  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 18:33:21.943822  290782 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 18:33:22.006124  290782 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 18:33:22.006199  290782 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 18:33:22.030631  290782 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 18:33:22.030711  290782 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 18:33:22.036840  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 18:33:22.099828  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 18:33:22.099910  290782 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 18:33:22.180417  290782 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 18:33:22.180489  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 18:33:22.239028  290782 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 18:33:22.239105  290782 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 18:33:22.308296  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 18:33:22.308376  290782 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 18:33:22.350083  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 18:33:22.388813  290782 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 18:33:22.388897  290782 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 18:33:22.412827  290782 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 18:33:22.412900  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 18:33:22.516836  290782 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:33:22.522860  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 18:33:22.529509  290782 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 18:33:22.529585  290782 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 18:33:22.565061  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:33:22.597046  290782 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 18:33:22.597122  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 18:33:22.686383  290782 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 18:33:22.686448  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 18:33:22.820997  290782 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 18:33:22.821072  290782 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 18:33:22.940988  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 18:33:24.348689  290782 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.528160366s)
	I1001 18:33:24.348770  290782 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1001 18:33:24.899490  290782 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-157757" context rescaled to 1 replicas
	I1001 18:33:26.494882  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.004287836s)
	I1001 18:33:26.495503  290782 addons.go:479] Verifying addon ingress=true in "addons-157757"
	I1001 18:33:26.495016  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.924357224s)
	I1001 18:33:26.495146  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.872466297s)
	W1001 18:33:26.495851  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:26.495868  290782 retry.go:31] will retry after 254.709937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
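kubectl's validation requires every manifest document to set apiVersion and kind; given the 14-byte ig-crd.yaml transfer logged at 18:33:20.719534, the file kubectl sees here is effectively empty, which is consistent with this error. A rough sketch of that field check, assuming gopkg.in/yaml.v3 is available (the placeholder document is an assumption about the file's contents):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	doc := []byte("# placeholder\n") // stand-in for the near-empty manifest
	var m map[string]interface{}
	_ = yaml.Unmarshal(doc, &m)
	// Both top-level fields must be present for kubectl to accept it.
	if m["apiVersion"] == nil || m["kind"] == nil {
		fmt.Println(`error validating data: [apiVersion not set, kind not set]`)
	}
}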
	I1001 18:33:26.495176  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.821611989s)
	I1001 18:33:26.495891  290782 addons.go:479] Verifying addon registry=true in "addons-157757"
	I1001 18:33:26.495195  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.817384842s)
	I1001 18:33:26.495213  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.813846835s)
	I1001 18:33:26.495225  290782 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.789830817s)
	I1001 18:33:26.497222  290782 node_ready.go:35] waiting up to 6m0s for node "addons-157757" to be "Ready" ...
	I1001 18:33:26.495256  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.780978633s)
	I1001 18:33:26.495293  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.764148054s)
	I1001 18:33:26.495316  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.744200241s)
	I1001 18:33:26.495361  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.72329467s)
	I1001 18:33:26.495410  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.458493322s)
	I1001 18:33:26.497655  290782 addons.go:479] Verifying addon metrics-server=true in "addons-157757"
	I1001 18:33:26.495440  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.145284775s)
	I1001 18:33:26.494985  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.980404126s)
	I1001 18:33:26.500136  290782 out.go:179] * Verifying ingress addon...
	I1001 18:33:26.502454  290782 out.go:179] * Verifying registry addon...
	I1001 18:33:26.502546  290782 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-157757 service yakd-dashboard -n yakd-dashboard
	
	I1001 18:33:26.506073  290782 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 18:33:26.506973  290782 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 18:33:26.519272  290782 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 18:33:26.519298  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:26.524661  290782 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 18:33:26.524728  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
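The two waits above (and the long run of "current state: Pending" lines below) are the same kapi.go loop: list pods by label selector and keep polling until none is Pending. A sketch of that wait against the ingress-nginx selector; the jsonpath output format, 2s interval, and 5m deadline are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// One phase word per matching pod, e.g. "Pending Running".
		out, _ := exec.Command("kubectl", "-n", "ingress-nginx",
			"get", "pods", "-l", "app.kubernetes.io/name=ingress-nginx",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		fmt.Println("current states:", phases)
		allReady := len(phases) > 0
		for _, p := range phases {
			if p != "Running" && p != "Succeeded" {
				allReady = false
			}
		}
		if allReady {
			return
		}
		time.Sleep(2 * time.Second)
	}
}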
	W1001 18:33:26.528097  290782 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
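This warning is Kubernetes' optimistic concurrency at work: the update carried a stale resourceVersion because something else modified the StorageClass between the read and the write, so the API server rejects the write and asks the client to re-read and retry. A toy in-memory model of that conflict (all names here are illustrative, not the real API machinery):

package main

import "fmt"

type object struct {
	resourceVersion int
	annotations     map[string]string
}

// update succeeds only if the caller's read version still matches storage.
func update(stored *object, readVersion int, mutate func(*object)) error {
	if readVersion != stored.resourceVersion {
		return fmt.Errorf("the object has been modified; please apply your changes to the latest version and try again")
	}
	mutate(stored)
	stored.resourceVersion++
	return nil
}

func main() {
	sc := &object{resourceVersion: 7, annotations: map[string]string{}}
	read := sc.resourceVersion
	// Another writer gets in first and bumps the version.
	_ = update(sc, sc.resourceVersion, func(o *object) {})
	// Our write, based on the stale read, now conflicts as in the log.
	if err := update(sc, read, func(o *object) {
		o.annotations["storageclass.kubernetes.io/is-default-class"] = "false"
	}); err != nil {
		fmt.Println("Error while marking storage class local-path as non-default:", err)
	}
}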
	I1001 18:33:26.568988  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.003834484s)
	W1001 18:33:26.569030  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 18:33:26.569051  290782 retry.go:31] will retry after 129.127717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
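The failure here is an ordering problem: a VolumeSnapshotClass cannot be created until the volumesnapshotclasses CRD is established and visible in discovery. The log resolves it by retrying the whole apply with --force below; an alternative (an assumption, not what minikube does here) is to wait for the CRD's Established condition between the two applies:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the snapshot CRD is established before creating
	// any VolumeSnapshotClass objects that depend on it.
	out, err := exec.Command("kubectl", "wait",
		"--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"--timeout=60s").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("CRD not established yet:", err)
	}
}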
	I1001 18:33:26.698805  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 18:33:26.751659  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:27.237004  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:27.237109  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:27.257716  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.316624389s)
	I1001 18:33:27.257799  290782 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-157757"
	I1001 18:33:27.261087  290782 out.go:179] * Verifying csi-hostpath-driver addon...
	I1001 18:33:27.264726  290782 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 18:33:27.305843  290782 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 18:33:27.305917  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:27.513499  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:27.513992  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:27.769197  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:28.010691  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:28.010767  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:28.268231  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:28.500337  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:28.510058  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:28.510267  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:28.769078  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:29.010271  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:29.010523  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:29.268822  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:29.510935  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:29.511292  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:29.728105  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.029249071s)
	I1001 18:33:29.728256  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.976512448s)
	W1001 18:33:29.728287  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:29.728307  290782 retry.go:31] will retry after 229.951155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:29.768524  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:29.959103  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:30.010753  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:30.010966  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:30.268920  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:30.513013  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:30.520902  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:30.521451  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:30.771739  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:30.814606  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:30.814687  290782 retry.go:31] will retry after 747.023778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:31.010344  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:31.010718  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:31.269761  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:31.511633  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:31.511840  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:31.562185  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:31.628492  290782 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 18:33:31.628587  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:31.652362  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:31.771122  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:31.812522  290782 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 18:33:31.840200  290782 addons.go:238] Setting addon gcp-auth=true in "addons-157757"
	I1001 18:33:31.840249  290782 host.go:66] Checking if "addons-157757" exists ...
	I1001 18:33:31.840685  290782 cli_runner.go:164] Run: docker container inspect addons-157757 --format={{.State.Status}}
	I1001 18:33:31.858487  290782 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 18:33:31.858542  290782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-157757
	I1001 18:33:31.878743  290782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/addons-157757/id_rsa Username:docker}
	I1001 18:33:32.010559  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:32.011706  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:32.268455  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:32.433928  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:32.433967  290782 retry.go:31] will retry after 857.517888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
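
	The "will retry after" intervals across this section (747ms, 857ms, 1.1s, 1.37s, 2.17s, 5.38s, 8.0s, 7.19s, 17.8s) grow roughly exponentially with randomized jitter, which is why they are not strictly monotonic. A minimal sketch of that pattern, with made-up base delay and attempt count rather than minikube's actual retry.go parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op, roughly doubling the delay each attempt and
	// adding jitter so the sleeps are not strictly monotonic.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Sleep somewhere in [delay, 2*delay); jitter keeps concurrent
			// clients from retrying in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 500*time.Millisecond, func() error {
			return errors.New("apply failed") // stand-in for the failing kubectl apply
		})
		fmt.Println("gave up:", err)
	}

	Backoff like this protects the apiserver from hammering, but as noted above it cannot rescue a request that fails deterministically.
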
	I1001 18:33:32.437450  290782 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1001 18:33:32.440163  290782 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1001 18:33:32.442977  290782 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 18:33:32.443005  290782 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 18:33:32.460818  290782 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 18:33:32.460839  290782 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 18:33:32.478949  290782 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 18:33:32.478973  290782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 18:33:32.497580  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 18:33:32.510820  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:32.511009  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:32.768404  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:32.998147  290782 addons.go:479] Verifying addon gcp-auth=true in "addons-157757"
	I1001 18:33:33.001246  290782 out.go:179] * Verifying gcp-auth addon...
	I1001 18:33:33.004915  290782 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
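
	The hundreds of kapi.go:96 "waiting for pod ... current state: Pending" lines that follow are a simple poll loop: list pods matching a label selector and report their phase until all of them are Running. A rough client-go sketch of that loop; the selector, namespace, and kubeconfig path are taken from the log, while the function name, 500ms poll interval, and 6-minute timeout are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching selector reports phase Running.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // poll interval is an assumption
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
			panic(err)
		}
	}
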
	W1001 18:33:33.023841  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:33.028650  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:33.028774  290782 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 18:33:33.028803  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:33.029055  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:33.268551  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:33.292607  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:33.511188  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:33.512805  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:33.513867  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:33.769475  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:34.012203  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:34.012431  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:34.013376  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1001 18:33:34.105504  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:34.105535  290782 retry.go:31] will retry after 1.100757854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:34.268454  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:34.508156  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:34.509912  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:34.510074  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:34.769976  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:35.010082  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:35.010348  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:35.010477  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:35.206863  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:35.268664  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:35.500829  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:35.511128  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:35.511292  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:35.511374  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:35.767981  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:36.010995  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:36.011287  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:36.012610  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1001 18:33:36.020202  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:36.020335  290782 retry.go:31] will retry after 1.374202861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:36.268126  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:36.510189  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:36.510188  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:36.510232  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:36.767930  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:37.008929  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:37.009897  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:37.011230  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:37.268119  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:37.395483  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:37.511421  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:37.511539  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:37.511696  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:37.769628  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:38.000673  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:38.012928  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:38.013872  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:38.015403  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1001 18:33:38.201115  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:38.201193  290782 retry.go:31] will retry after 2.173645494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:38.267940  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:38.507584  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:38.509719  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:38.509837  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:38.767811  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:39.008546  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:39.010437  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:39.010536  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:39.267964  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:39.509101  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:39.509518  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:39.509991  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:39.768642  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:40.010051  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:40.010104  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:40.011305  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:40.268420  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:40.375795  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1001 18:33:40.501341  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:40.510886  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:40.512275  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:40.513218  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:40.768887  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:41.013612  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:41.013986  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:41.014630  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1001 18:33:41.209423  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:41.209462  290782 retry.go:31] will retry after 5.378366898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:41.268466  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:41.508707  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:41.513100  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:41.514905  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:41.768398  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:42.009486  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:42.009601  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:42.010064  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:42.268322  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:42.508637  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:42.510573  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:42.510589  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:42.767614  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:43.000749  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:43.009427  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:43.009512  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:43.010227  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:43.268214  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:43.509547  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:43.509683  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:43.512064  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:43.768046  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:44.007735  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:44.009839  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:44.010336  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:44.267644  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:44.508668  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:44.510057  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:44.510518  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:44.770053  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:45.000869  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:45.010299  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:45.010387  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:45.010518  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:45.269665  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:45.508019  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:45.510322  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:45.510338  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:45.768476  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:46.013060  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:46.013200  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:46.013518  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:46.268746  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:46.510687  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:46.513795  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:46.514716  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:46.587986  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:46.768702  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:47.001361  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:47.011840  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:47.012280  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:47.013075  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:47.268955  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:47.391569  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:47.391602  290782 retry.go:31] will retry after 7.999287173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:47.509584  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:47.509673  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:47.511392  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:47.768668  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:48.008926  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:48.009865  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:48.011973  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:48.268097  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:48.508439  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:48.509193  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:48.510523  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:48.767595  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:49.008444  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:49.010581  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:49.010876  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:49.267814  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:49.500939  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:49.507907  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:49.509948  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:49.510028  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:49.770534  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:50.007748  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:50.008951  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:50.009847  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:50.267827  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:50.507377  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:50.509172  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:50.510367  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:50.768725  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:51.007669  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:51.009794  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:51.009938  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:51.267861  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:51.501012  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:51.508862  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:51.509157  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:51.510056  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:51.768134  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:52.009877  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:52.010100  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:52.010192  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:52.268330  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:52.507954  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:52.509323  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:52.509675  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:52.767569  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:53.007932  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:53.009913  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:53.010374  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:53.268791  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:53.509853  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:53.509864  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:53.510290  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:53.768088  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:54.000866  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:54.007619  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:54.009974  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:54.010083  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:54.268678  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:54.508238  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:54.509409  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:54.510259  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:54.768879  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:55.007468  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:55.009485  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:55.009975  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:55.268414  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:55.391824  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:33:55.511020  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:55.511529  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:55.513107  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:55.768126  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:56.009264  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:56.011013  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:56.011596  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1001 18:33:56.212727  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:56.212758  290782 retry.go:31] will retry after 7.19466706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:33:56.267494  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:56.500619  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:56.509223  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:56.509304  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:56.511216  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:56.768740  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:57.009821  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:57.009888  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:57.009998  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:57.268085  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:57.508758  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:57.509644  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:57.510508  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:57.769117  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:58.008118  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:58.009230  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:58.009717  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:58.267515  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:58.509957  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:58.511309  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:58.511864  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:58.768192  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:33:59.000136  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:33:59.008245  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:59.009680  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:59.010829  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:59.267690  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:33:59.509485  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:33:59.509613  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:33:59.511233  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:33:59.768097  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:00.009332  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:00.010267  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:00.010457  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:00.298159  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:00.508849  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:00.511688  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:00.511895  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:00.768193  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:34:01.001078  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:34:01.007691  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:01.010485  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:01.010896  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:01.267754  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:01.507914  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:01.510283  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:01.511139  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:01.768342  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:02.007868  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:02.008900  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:02.010362  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:02.268654  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:02.510620  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:02.511557  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:02.511798  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:02.768143  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:03.008781  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:03.010595  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:03.010966  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:03.268444  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:03.407612  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1001 18:34:03.501090  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:34:03.509665  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:03.510080  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:03.510858  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:03.768151  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:04.009264  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:04.010171  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:04.012311  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1001 18:34:04.212881  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:34:04.212916  290782 retry.go:31] will retry after 17.799445132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:34:04.268008  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:04.509949  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:04.509966  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:04.511202  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:04.769095  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:05.009525  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:05.009789  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:05.010526  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:05.267641  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:05.509937  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:05.510507  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:05.510522  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:05.770498  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1001 18:34:06.000435  290782 node_ready.go:57] node "addons-157757" has "Ready":"False" status (will retry)
	I1001 18:34:06.008693  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:06.009754  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:06.011013  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:06.267857  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:06.503221  290782 node_ready.go:49] node "addons-157757" is "Ready"
	I1001 18:34:06.503301  290782 node_ready.go:38] duration metric: took 40.00603197s for node "addons-157757" to be "Ready" ...
	I1001 18:34:06.503339  290782 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:34:06.503438  290782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:34:06.518309  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:06.519589  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:06.519987  290782 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 18:34:06.520037  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:06.526318  290782 api_server.go:72] duration metric: took 45.992980297s to wait for apiserver process to appear ...
	I1001 18:34:06.526388  290782 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:34:06.526436  290782 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 18:34:06.537843  290782 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 18:34:06.539078  290782 api_server.go:141] control plane version: v1.34.1
	I1001 18:34:06.539142  290782 api_server.go:131] duration metric: took 12.719124ms to wait for apiserver health ...
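
The healthz wait above is a plain HTTP GET against https://192.168.49.2:8443/healthz that succeeds once the endpoint answers 200 with body "ok". A minimal sketch of such a probe follows; skipping TLS verification is an assumption made here because the test cluster serves a self-signed certificate, and this is not necessarily how minikube's api_server.go implements it.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz polls url until it returns HTTP 200 with body "ok",
    // or the deadline passes.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption: self-signed test-cluster cert, so skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz at %s never returned ok", url)
    }

    func main() {
    	if err := probeHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
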
	I1001 18:34:06.539181  290782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:34:06.549231  290782 system_pods.go:59] 19 kube-system pods found
	I1001 18:34:06.549319  290782 system_pods.go:61] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:06.549341  290782 system_pods.go:61] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending
	I1001 18:34:06.549378  290782 system_pods.go:61] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending
	I1001 18:34:06.549404  290782 system_pods.go:61] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending
	I1001 18:34:06.549425  290782 system_pods.go:61] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:06.549462  290782 system_pods.go:61] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:06.549489  290782 system_pods.go:61] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:06.549510  290782 system_pods.go:61] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:06.549548  290782 system_pods.go:61] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:06.549580  290782 system_pods.go:61] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:06.549601  290782 system_pods.go:61] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:06.549635  290782 system_pods.go:61] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending
	I1001 18:34:06.549660  290782 system_pods.go:61] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending
	I1001 18:34:06.549684  290782 system_pods.go:61] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:06.549719  290782 system_pods.go:61] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending
	I1001 18:34:06.549747  290782 system_pods.go:61] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending
	I1001 18:34:06.549768  290782 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending
	I1001 18:34:06.549805  290782 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending
	I1001 18:34:06.549829  290782 system_pods.go:61] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Pending
	I1001 18:34:06.549850  290782 system_pods.go:74] duration metric: took 10.650166ms to wait for pod list to return data ...
	I1001 18:34:06.549885  290782 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:34:06.558843  290782 default_sa.go:45] found service account: "default"
	I1001 18:34:06.558915  290782 default_sa.go:55] duration metric: took 9.007088ms for default service account to be created ...
	I1001 18:34:06.558940  290782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:34:06.568751  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:06.568838  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:06.568862  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending
	I1001 18:34:06.568910  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending
	I1001 18:34:06.568935  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending
	I1001 18:34:06.568957  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:06.568982  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:06.569015  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:06.569043  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:06.569067  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:06.569088  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:06.569121  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:06.569162  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending
	I1001 18:34:06.569183  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending
	I1001 18:34:06.569205  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:06.569236  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending
	I1001 18:34:06.569266  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending
	I1001 18:34:06.569286  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending
	I1001 18:34:06.569314  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending
	I1001 18:34:06.569345  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Pending
	I1001 18:34:06.569383  290782 retry.go:31] will retry after 207.886057ms: missing components: kube-dns
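
Each "missing components: kube-dns" line is one pass of a poll that lists the kube-system pods and retries until the required components report Running; in the stanzas that follow, coredns flips from Pending to Running and the poll ends. A sketch of the same check with client-go is below; the k8s-app=kube-dns label selector for the coredns pods is a conventional assumption, and this is not minikube's actual system_pods.go.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // kubeDNSRunning reports whether every coredns pod in kube-system is Running.
    func kubeDNSRunning(cs *kubernetes.Clientset) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // conventional coredns label (assumption)
    	if err != nil {
    		return false, err
    	}
    	if len(pods.Items) == 0 {
    		return false, nil // missing component: kube-dns
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ok, err := kubeDNSRunning(cs)
    	fmt.Println(ok, err)
    }
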
	I1001 18:34:06.799291  290782 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 18:34:06.799321  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:06.800410  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:06.800441  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:06.800449  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending
	I1001 18:34:06.800454  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending
	I1001 18:34:06.800459  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending
	I1001 18:34:06.800462  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:06.800467  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:06.800472  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:06.800484  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:06.800497  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:06.800502  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:06.800513  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:06.800520  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:06.800524  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending
	I1001 18:34:06.800536  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:06.800541  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending
	I1001 18:34:06.800545  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending
	I1001 18:34:06.800549  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending
	I1001 18:34:06.800566  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending
	I1001 18:34:06.800572  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Pending
	I1001 18:34:06.800588  290782 retry.go:31] will retry after 334.685689ms: missing components: kube-dns
	I1001 18:34:07.018144  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:07.018343  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:07.018986  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:07.140986  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:07.141025  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:07.141037  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:34:07.141046  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:34:07.141054  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:34:07.141059  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:07.141064  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:07.141074  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:07.141079  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:07.141089  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:07.141093  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:07.141104  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:07.141111  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:07.141122  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 18:34:07.141129  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:07.141136  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 18:34:07.141143  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:34:07.141149  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:07.141159  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:07.141169  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:34:07.141186  290782 retry.go:31] will retry after 349.254ms: missing components: kube-dns
	I1001 18:34:07.270298  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:07.507603  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:07.507641  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:07.507650  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:34:07.507659  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:34:07.507666  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:34:07.507670  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:07.507676  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:07.507680  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:07.507685  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:07.507692  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:07.507697  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:07.507704  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:07.507710  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:07.507747  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 18:34:07.507761  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:07.507771  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 18:34:07.507792  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:34:07.507803  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:07.507810  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:07.507817  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:34:07.507831  290782 retry.go:31] will retry after 581.252109ms: missing components: kube-dns
	I1001 18:34:07.536027  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:07.536434  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:07.536752  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:07.769239  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:08.010774  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:08.011179  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:08.011507  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:08.094855  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:08.094899  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:08.094910  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:34:08.094919  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:34:08.094926  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:34:08.094932  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:08.094942  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:08.094947  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:08.094951  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:08.094958  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:08.094962  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:08.094969  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:08.094975  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:08.094984  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 18:34:08.094991  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:08.094997  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 18:34:08.095005  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:34:08.095013  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:08.095022  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:08.095029  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 18:34:08.095045  290782 retry.go:31] will retry after 533.544635ms: missing components: kube-dns
	I1001 18:34:08.269113  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:08.511587  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:08.511613  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:08.511986  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:08.651961  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:08.652047  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:08.652075  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:34:08.652116  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:34:08.652143  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:34:08.652172  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:08.652206  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:08.652231  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:08.652252  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:08.652300  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:08.652324  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:08.652346  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:08.652391  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:08.652424  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 18:34:08.652459  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:08.652489  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 18:34:08.652509  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:34:08.652545  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:08.652577  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:08.652595  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Running
	I1001 18:34:08.652648  290782 retry.go:31] will retry after 747.806502ms: missing components: kube-dns
	I1001 18:34:08.768866  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:09.011423  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:09.011682  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:09.011717  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:09.269133  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:09.407432  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:09.407529  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:34:09.407554  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:34:09.407596  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:34:09.407624  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:34:09.407645  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:09.407684  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:09.407710  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:09.407733  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:09.407773  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:09.407798  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:09.407820  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:09.407858  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:09.407886  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 18:34:09.407913  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:09.407947  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 18:34:09.407973  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:34:09.407997  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:09.408035  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:09.408061  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Running
	I1001 18:34:09.408095  290782 retry.go:31] will retry after 854.271642ms: missing components: kube-dns
	I1001 18:34:09.512199  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:09.512602  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:09.512969  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:09.769430  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:10.009515  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:10.011610  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:10.012148  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:10.267538  290782 system_pods.go:86] 19 kube-system pods found
	I1001 18:34:10.267575  290782 system_pods.go:89] "coredns-66bc5c9577-84mdw" [c8996410-b266-4fca-9f7e-06a6ab0f1271] Running
	I1001 18:34:10.267587  290782 system_pods.go:89] "csi-hostpath-attacher-0" [c8c0bfca-689e-4993-9977-4444fe25d48a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 18:34:10.267594  290782 system_pods.go:89] "csi-hostpath-resizer-0" [b15f8917-b254-46e5-85c6-f72b6ba37bd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 18:34:10.267602  290782 system_pods.go:89] "csi-hostpathplugin-4hsdq" [55e68c17-2b31-4b0f-86f1-5cd9a123a80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 18:34:10.267607  290782 system_pods.go:89] "etcd-addons-157757" [ecdb8cd7-3935-4411-be75-56ef1ac27649] Running
	I1001 18:34:10.267611  290782 system_pods.go:89] "kindnet-gqwn9" [b7538297-42dc-48c5-8a86-cd0cf0909585] Running
	I1001 18:34:10.267622  290782 system_pods.go:89] "kube-apiserver-addons-157757" [2d04f5dd-750b-4d7c-b9c0-babc15ec8183] Running
	I1001 18:34:10.267626  290782 system_pods.go:89] "kube-controller-manager-addons-157757" [0905c745-901d-4bf6-a888-5f3865889026] Running
	I1001 18:34:10.267634  290782 system_pods.go:89] "kube-ingress-dns-minikube" [e1499a85-a8a9-4dcf-a783-1552ed1e2df4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1001 18:34:10.267643  290782 system_pods.go:89] "kube-proxy-cd2p6" [b94bcb0f-03c5-4fc9-9a07-1621a1d154ba] Running
	I1001 18:34:10.267648  290782 system_pods.go:89] "kube-scheduler-addons-157757" [9c5fecc7-a875-413d-930f-3f1867a74d0d] Running
	I1001 18:34:10.267654  290782 system_pods.go:89] "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 18:34:10.267665  290782 system_pods.go:89] "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 18:34:10.267672  290782 system_pods.go:89] "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 18:34:10.267680  290782 system_pods.go:89] "registry-creds-764b6fb674-kq6qc" [42e70c7d-aed4-4667-97f1-51cfa662bbe9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1001 18:34:10.267690  290782 system_pods.go:89] "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 18:34:10.267696  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b6nm7" [89ef13f5-9468-4cd8-9391-f066d017eeee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:10.267703  290782 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fs5sl" [f0534c80-d0f4-461e-9eed-bc4bb58cc3f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 18:34:10.267707  290782 system_pods.go:89] "storage-provisioner" [93f5a125-845d-4819-a87c-4aa4f8bca761] Running
	I1001 18:34:10.267716  290782 system_pods.go:126] duration metric: took 3.708755637s to wait for k8s-apps to be running ...
	I1001 18:34:10.267727  290782 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:34:10.267788  290782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:34:10.270162  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:10.291585  290782 system_svc.go:56] duration metric: took 23.848825ms WaitForService to wait for kubelet
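
The kubelet wait above resolves to a single systemd query: systemctl is-active --quiet exits 0 only while the unit is active, so no output parsing is needed. A local sketch of the same probe follows (minikube runs the command over SSH inside the guest; the unit name kubelet is taken from the log).

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // kubeletActive returns true when systemd reports the kubelet unit active;
    // `systemctl is-active --quiet` communicates purely through its exit status.
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	start := time.Now()
    	for !kubeletActive() {
    		time.Sleep(time.Second)
    	}
    	fmt.Printf("kubelet active after %s\n", time.Since(start))
    }
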
	I1001 18:34:10.291616  290782 kubeadm.go:578] duration metric: took 49.758281391s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:34:10.291635  290782 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:34:10.294825  290782 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 18:34:10.294856  290782 node_conditions.go:123] node cpu capacity is 2
	I1001 18:34:10.294872  290782 node_conditions.go:105] duration metric: took 3.231191ms to run NodePressure ...
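
The NodePressure step reads the capacity the node advertises in its status (203034800Ki of ephemeral storage and 2 CPUs here). A sketch of fetching those fields with client-go follows; the kubeconfig path and node name are copied from the log, and the condition printout illustrates what a pressure check would inspect rather than reproducing node_conditions.go.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-157757", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	fmt.Println("node storage ephemeral capacity is", storage.String())
    	fmt.Println("node cpu capacity is", cpu.String())
    	// A healthy node reports MemoryPressure, DiskPressure and PIDPressure
    	// as False, and Ready as True.
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%s=%s\n", c.Type, c.Status)
    	}
    }
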
	I1001 18:34:10.294884  290782 start.go:241] waiting for startup goroutines ...
	I1001 18:34:10.512137  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:10.512516  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:10.512915  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:10.769214  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:11.012407  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:11.012731  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:11.013316  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:11.273744  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:11.510238  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:11.510414  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:11.511787  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:11.769588  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:12.010671  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:12.010764  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:12.011097  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:12.268636  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:12.509606  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:12.509701  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:12.511687  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:12.769531  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:13.012544  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:13.012867  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:13.013161  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:13.269626  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:13.512900  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:13.512996  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:13.513544  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:13.769667  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:14.012537  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:14.013062  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:14.013376  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:14.271288  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:14.512794  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:14.512865  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:14.513796  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:14.772455  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:15.013448  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:15.013881  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:15.014496  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:15.268375  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:15.515003  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:15.515213  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:15.516101  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:15.768607  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:16.014179  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:16.014904  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:16.015884  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:16.273057  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:16.507672  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:16.510402  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:16.510631  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:16.768297  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:17.012222  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:17.012595  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:17.018972  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:17.268640  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:17.507614  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:17.509145  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:17.510216  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:17.768622  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:18.009592  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:18.009829  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:18.010441  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:18.270900  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:18.514823  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:18.514996  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:18.515639  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:18.767960  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:19.016431  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:19.017072  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:19.017828  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:19.269016  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:19.507940  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:19.509845  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:19.510007  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:19.768679  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:20.013029  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:20.017339  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:20.018259  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:20.269551  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:20.510475  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:20.510930  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:20.511096  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:20.769730  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:21.011131  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:21.011289  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:21.011655  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:21.268170  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:21.510555  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:21.510756  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:21.511099  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:21.769230  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:22.011584  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:22.011728  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:22.011796  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:22.013044  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:34:22.287862  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:22.516889  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:22.516984  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:22.517469  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:22.768646  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:23.029693  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:23.029825  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:23.033038  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:23.276936  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:23.339704  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.32663038s)
	W1001 18:34:23.339738  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1001 18:34:23.339759  290782 retry.go:31] will retry after 29.942357952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
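	The stderr above points at /etc/kubernetes/addons/ig-crd.yaml: kubectl's validator rejects any YAML document that does not declare apiVersion and kind, so at least one document in that file reaches the node without its header (an empty or comment-only document left over from templating is a common cause). A quick way to locate it, sketched here as a diagnostic only (run inside the node, e.g. via minikube ssh; this is not a command the harness ran):
	
	  # list document separators and header fields with line numbers;
	  # a "---" not followed by apiVersion:/kind: marks the offending document
	  sudo grep -n -e '^---' -e '^apiVersion:' -e '^kind:' /etc/kubernetes/addons/ig-crd.yaml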
	I1001 18:34:23.507907  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:23.511278  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:23.511472  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:23.771080  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:24.008174  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:24.011981  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:24.012071  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:24.268957  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:24.508716  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:24.508885  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:24.510981  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:24.769632  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:25.008382  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:25.009792  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:25.010815  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:25.268799  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:25.509953  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:25.510138  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:25.510510  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:25.769900  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:26.010187  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:26.010398  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:26.010940  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:26.268219  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:26.512962  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:26.513337  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:26.515288  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:26.768921  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:27.013069  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:27.013623  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:27.014335  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:27.269938  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:27.512666  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:27.516982  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:27.517443  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:27.771446  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:28.025996  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:28.030344  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:28.033982  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:28.269772  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:28.519692  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:28.519791  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:28.519997  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:28.768190  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:29.009426  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:29.009481  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:29.011104  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:29.269745  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:29.510680  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:29.510835  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:29.513337  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:29.769172  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:30.011053  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:30.011634  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:30.012320  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:30.269012  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:30.510818  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:30.511058  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:30.511560  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:30.768591  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:31.009512  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:31.009735  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:31.011437  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:31.271052  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:31.509698  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:31.510275  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:31.510588  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:31.769277  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:32.007983  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:32.009591  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:32.010569  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:32.270416  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:32.510608  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:32.510911  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:32.512203  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:32.769028  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:33.029800  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:33.029921  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:33.030557  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:33.273436  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:33.512200  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:33.512357  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:33.512729  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:33.820366  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:34.044420  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:34.047742  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:34.048733  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:34.269960  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:34.511824  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:34.512986  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:34.513911  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:34.770058  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:35.014636  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:35.018242  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:35.018546  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:35.267907  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:35.516505  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:35.516863  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:35.518163  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:35.771452  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:36.009606  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:36.009908  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:36.011771  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:36.268424  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:36.514538  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:36.514698  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:36.514938  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:36.769000  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:37.009404  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:37.016687  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:37.025720  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:37.269136  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:37.511751  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:37.512248  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:37.512577  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:37.768179  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:38.020231  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:38.020646  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:38.020787  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:38.268369  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:38.510841  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:38.512050  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:38.512230  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:38.780795  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:39.023780  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:39.026977  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:39.031340  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:39.273476  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:39.513695  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:39.514299  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:39.517951  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:39.780698  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:40.010617  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:40.010700  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:40.011591  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:40.291178  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:40.511308  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:40.511541  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:40.511915  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:40.768588  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:41.008236  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:41.010353  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:41.010539  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:41.268772  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:41.508187  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:41.510299  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:41.510903  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:41.768405  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:42.009754  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:42.009882  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:42.011048  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:42.269297  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:42.512362  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:42.512454  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:42.512793  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:42.767706  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:43.009006  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:43.012314  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:43.022466  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:43.268380  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:43.511472  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:43.512998  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:43.513351  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:43.768809  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:44.011845  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:44.011956  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:44.012075  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:44.292146  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:44.512080  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:44.512250  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:44.515184  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:44.769668  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:45.012384  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:45.012597  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:45.012650  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:45.270289  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:45.515489  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:45.515658  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:45.515728  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:45.775204  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:46.024919  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:46.025059  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:46.026210  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:46.269394  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:46.511785  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:46.516938  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:46.517517  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:46.769814  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:47.009498  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:47.012006  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:47.012629  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:47.268560  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:47.510732  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:47.510846  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:47.511478  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:47.769542  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:48.010741  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:48.011191  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:48.011247  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:48.270092  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:48.512670  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:48.513156  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:48.516471  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:48.768486  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:49.010955  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:49.012831  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:49.012960  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:49.269039  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:49.511133  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:49.511275  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:49.511945  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:49.769387  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:50.013108  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:50.020412  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:50.022490  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:50.270280  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:50.522632  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:50.522762  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:50.522922  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:50.769688  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:51.009088  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:51.011398  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:51.011775  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:51.269119  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:51.510450  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:51.512328  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:51.515487  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:51.769613  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:52.009890  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:52.010138  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:52.010981  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:52.269837  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:52.511326  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:52.514967  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:52.515225  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:52.776278  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:53.010611  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:53.013552  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:53.014586  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:53.268347  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:53.282692  290782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1001 18:34:53.507704  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:53.515582  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:53.515689  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:53.768035  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:54.010189  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:54.010551  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:54.010636  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:54.269429  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:54.382055  290782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.099269525s)
	W1001 18:34:54.382154  290782 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1001 18:34:54.382271  290782 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
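	The retry roughly 30 seconds later hits the same validation error, so enabling 'inspektor-gadget' is reported as failed even though the stdout shows the gadget DaemonSet itself was configured. The error text names its own escape hatch; a hedged sketch of that manual re-apply follows (whether it actually succeeds depends on what the headerless document contains, since a document with no kind can still fail to decode even with validation off):
	
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml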
	I1001 18:34:54.510056  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:54.510298  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:54.510460  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:54.769046  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:55.025393  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:55.026377  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:55.026869  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:55.268513  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:55.508739  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:55.509993  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:55.510933  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:55.768399  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:56.008917  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:56.010222  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:56.012515  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:56.268087  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:56.508676  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:56.509829  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:56.510854  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:56.769190  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:57.023947  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:57.024208  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:57.024840  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:57.268347  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:57.512942  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:57.513052  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:57.513771  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:57.768616  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:58.011848  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:58.012235  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:58.013389  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:58.270278  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:58.508438  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:58.509654  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 18:34:58.511132  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:58.768522  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:59.008528  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:59.010648  290782 kapi.go:107] duration metric: took 1m32.504575089s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 18:34:59.010829  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:59.268872  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:34:59.508740  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:34:59.511938  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:34:59.769661  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:00.010897  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:00.011009  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:00.281928  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:00.510476  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:00.541951  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:00.769143  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:01.009074  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:01.011259  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:01.270007  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:01.508657  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:01.510953  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:01.768556  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:02.011559  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:02.011941  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:02.269265  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:02.508332  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:02.510506  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:02.768339  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:03.009231  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:03.011739  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:03.267766  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:03.507733  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:03.516933  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:03.768329  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:04.008155  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:04.010601  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:04.269513  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:04.513943  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:04.515470  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:04.770976  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:05.010025  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:05.011528  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:05.279257  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:05.508857  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:05.511932  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:05.768176  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:06.007912  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:06.010456  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:06.269848  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:06.508077  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:06.510352  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:06.785145  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:07.015683  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:07.015964  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:07.268731  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:07.513030  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:07.513864  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:07.768711  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:08.033947  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:08.034110  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:08.278871  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:08.508851  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:08.512685  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:08.770910  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:09.009991  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:09.017783  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:09.268803  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:09.509723  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:09.511577  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:09.768542  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:10.009259  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:10.010967  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:10.277266  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:10.509346  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:10.514100  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:10.769651  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:11.008745  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:11.011463  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:11.267889  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:11.511284  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:11.512612  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:11.769684  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:12.015835  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:12.016316  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:12.269390  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:12.509679  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:12.509957  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:12.768947  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:13.018101  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:13.018981  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:13.268747  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:13.508533  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:13.511544  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:13.769139  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:14.010417  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:14.011918  290782 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 18:35:14.268748  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:14.509176  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:14.511289  290782 kapi.go:107] duration metric: took 1m48.004314765s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 18:35:14.769658  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:15.009807  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:15.277452  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:15.513194  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:15.770312  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:16.009336  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:16.269587  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:16.523736  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:16.768348  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:17.008844  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:17.268610  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:17.508986  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:17.768581  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:18.008913  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:18.272929  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:18.508752  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:18.768247  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:19.008787  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:19.268105  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:19.508947  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:19.769292  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:20.008809  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 18:35:20.268988  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:20.508195  290782 kapi.go:107] duration metric: took 1m47.503278204s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 18:35:20.511253  290782 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-157757 cluster.
	I1001 18:35:20.513992  290782 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 18:35:20.516756  290782 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 18:35:20.768489  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:21.268535  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:21.768471  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:22.269012  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:22.768244  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:23.268691  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:23.768178  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:24.269286  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:24.771074  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:25.271867  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:25.769720  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:26.268712  290782 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 18:35:26.768607  290782 kapi.go:107] duration metric: took 1m59.503880178s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 18:35:26.771920  290782 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, storage-provisioner, registry-creds, metrics-server, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1001 18:35:26.774858  290782 addons.go:514] duration metric: took 2m6.24111891s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns storage-provisioner registry-creds metrics-server nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1001 18:35:26.774918  290782 start.go:246] waiting for cluster config update ...
	I1001 18:35:26.774939  290782 start.go:255] writing updated cluster config ...
	I1001 18:35:26.775241  290782 ssh_runner.go:195] Run: rm -f paused
	I1001 18:35:26.778563  290782 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:35:26.781837  290782 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-84mdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:26.788396  290782 pod_ready.go:94] pod "coredns-66bc5c9577-84mdw" is "Ready"
	I1001 18:35:26.788470  290782 pod_ready.go:86] duration metric: took 6.594608ms for pod "coredns-66bc5c9577-84mdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:26.791014  290782 pod_ready.go:83] waiting for pod "etcd-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:26.797009  290782 pod_ready.go:94] pod "etcd-addons-157757" is "Ready"
	I1001 18:35:26.797048  290782 pod_ready.go:86] duration metric: took 6.005543ms for pod "etcd-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:26.804869  290782 pod_ready.go:83] waiting for pod "kube-apiserver-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:26.809960  290782 pod_ready.go:94] pod "kube-apiserver-addons-157757" is "Ready"
	I1001 18:35:26.810028  290782 pod_ready.go:86] duration metric: took 5.128362ms for pod "kube-apiserver-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:26.812824  290782 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:27.183227  290782 pod_ready.go:94] pod "kube-controller-manager-addons-157757" is "Ready"
	I1001 18:35:27.183254  290782 pod_ready.go:86] duration metric: took 370.404251ms for pod "kube-controller-manager-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:27.384338  290782 pod_ready.go:83] waiting for pod "kube-proxy-cd2p6" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:27.782829  290782 pod_ready.go:94] pod "kube-proxy-cd2p6" is "Ready"
	I1001 18:35:27.782862  290782 pod_ready.go:86] duration metric: took 398.492118ms for pod "kube-proxy-cd2p6" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:27.983306  290782 pod_ready.go:83] waiting for pod "kube-scheduler-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:28.383069  290782 pod_ready.go:94] pod "kube-scheduler-addons-157757" is "Ready"
	I1001 18:35:28.383101  290782 pod_ready.go:86] duration metric: took 399.760426ms for pod "kube-scheduler-addons-157757" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:35:28.383116  290782 pod_ready.go:40] duration metric: took 1.604520777s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:35:28.443775  290782 start.go:620] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1001 18:35:28.447010  290782 out.go:179] * Done! kubectl is now configured to use "addons-157757" cluster and "default" namespace by default
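	
	The gcp-auth notice above is the one actionable hint in this start log: the mutating webhook injects credentials when a pod is created, so the `gcp-auth-skip-secret` label has to be present at creation time, and pods that predate the addon need a refresh. A minimal sketch (the pod name and image below are placeholders, not from this run):
	
	    # create a pod that opts out of credential mounting
	    kubectl --context addons-157757 run demo --image=busybox:1.36 --labels=gcp-auth-skip-secret=true -- sleep 3600
	    # re-mount credentials into pods created before the addon was enabled
	    out/minikube-linux-arm64 -p addons-157757 addons enable gcp-auth --refresh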
	
	
	==> CRI-O <==
	Oct 01 18:37:16 addons-157757 crio[985]: time="2025-10-01 18:37:16.268557564Z" level=info msg="Removed pod sandbox: 348c7bd6556ecef64f457ad155490e4c144f5fab89e0dda26d0d19f439f06d51" id=b6bb572f-d066-4bf5-b116-78b679514176 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.029364137Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-ctxzj/POD" id=45207dd0-c019-42a8-a84d-a66ac66b4bd4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.029441956Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.062019371Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ctxzj Namespace:default ID:d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3 UID:6d133829-fc78-4f3c-a82d-b6336753d327 NetNS:/var/run/netns/527ea3c0-7845-46f7-9d8a-df7a348a7b06 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.062230015Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-ctxzj to CNI network \"kindnet\" (type=ptp)"
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.077139714Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ctxzj Namespace:default ID:d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3 UID:6d133829-fc78-4f3c-a82d-b6336753d327 NetNS:/var/run/netns/527ea3c0-7845-46f7-9d8a-df7a348a7b06 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.077304442Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-ctxzj for CNI network kindnet (type=ptp)"
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.081571618Z" level=info msg="Ran pod sandbox d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3 with infra container: default/hello-world-app-5d498dc89-ctxzj/POD" id=45207dd0-c019-42a8-a84d-a66ac66b4bd4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.082982840Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=fecbb0ba-c6a4-417d-9b5f-7261c8f4ffbd name=/runtime.v1.ImageService/ImageStatus
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.083218665Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=fecbb0ba-c6a4-417d-9b5f-7261c8f4ffbd name=/runtime.v1.ImageService/ImageStatus
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.084467416Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=791645b8-9f2d-411e-8d1d-591ac67c202c name=/runtime.v1.ImageService/PullImage
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.087178195Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.301390224Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.963281347Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=791645b8-9f2d-411e-8d1d-591ac67c202c name=/runtime.v1.ImageService/PullImage
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.963969091Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e3917a44-a3c4-49d2-9374-2a685c9b1a11 name=/runtime.v1.ImageService/ImageStatus
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.964629922Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e3917a44-a3c4-49d2-9374-2a685c9b1a11 name=/runtime.v1.ImageService/ImageStatus
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.965436100Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=041f7b34-c941-4cbc-a396-350338d88dc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.966045722Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=041f7b34-c941-4cbc-a396-350338d88dc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.971369909Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-ctxzj/hello-world-app" id=7d610e1d-8732-43a3-8728-0f7651909bb7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 01 18:39:31 addons-157757 crio[985]: time="2025-10-01 18:39:31.971469430Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 01 18:39:32 addons-157757 crio[985]: time="2025-10-01 18:39:32.002476221Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/31bb9a1c6e0800eeb30d423a32fc5f6445bc5fa203b4aa5f6fb25cd5fecadadc/merged/etc/passwd: no such file or directory"
	Oct 01 18:39:32 addons-157757 crio[985]: time="2025-10-01 18:39:32.002658459Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/31bb9a1c6e0800eeb30d423a32fc5f6445bc5fa203b4aa5f6fb25cd5fecadadc/merged/etc/group: no such file or directory"
	Oct 01 18:39:32 addons-157757 crio[985]: time="2025-10-01 18:39:32.076600608Z" level=info msg="Created container 868d9f92523d5b21f4419d3b7fc8a574ed867eb47c821533e71fb447c2b0709b: default/hello-world-app-5d498dc89-ctxzj/hello-world-app" id=7d610e1d-8732-43a3-8728-0f7651909bb7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 01 18:39:32 addons-157757 crio[985]: time="2025-10-01 18:39:32.077740738Z" level=info msg="Starting container: 868d9f92523d5b21f4419d3b7fc8a574ed867eb47c821533e71fb447c2b0709b" id=eb41c764-a612-4260-8ac8-b8bf74cc1de9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 01 18:39:32 addons-157757 crio[985]: time="2025-10-01 18:39:32.085587707Z" level=info msg="Started container" PID=9875 containerID=868d9f92523d5b21f4419d3b7fc8a574ed867eb47c821533e71fb447c2b0709b description=default/hello-world-app-5d498dc89-ctxzj/hello-world-app id=eb41c764-a612-4260-8ac8-b8bf74cc1de9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3
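	
	The CRI-O excerpt above traces one full CRI round trip for the hello-world-app pod: RunPodSandbox, an ImageStatus miss, PullImage, CreateContainer, StartContainer. The same state can be inspected from outside with crictl on the node; a sketch, assuming crictl is on the node's PATH as in minikube's kicbase image (the short container ID is taken from the container-status table below):
	
	    out/minikube-linux-arm64 -p addons-157757 ssh "sudo crictl images | grep echo-server"
	    out/minikube-linux-arm64 -p addons-157757 ssh "sudo crictl inspect 868d9f92523d5"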
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	868d9f92523d5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   d5cd945710469       hello-world-app-5d498dc89-ctxzj
	0a0dd27e7c98b       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   a3feca671f169       nginx
	c96f978b397b7       ghcr.io/headlamp-k8s/headlamp@sha256:7603bcb0ad1a485ef12d4bc84e8e2b4c368d0d4df841ab9df8171e2f2b0e0710                        2 minutes ago            Running             headlamp                  0                   588d5dd4d217d       headlamp-85f8f8dc54-p48kw
	739186689c717       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   19e293d181392       busybox
	016554dcdbb76       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago            Running             controller                0                   9def89588fa95       ingress-nginx-controller-9cc49f96f-f2kn7
	21df34a84f3b0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago            Running             gadget                    0                   6eb56757286c2       gadget-g9vlb
	d9d8f280880ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              patch                     0                   ac6f8b36dba4a       ingress-nginx-admission-patch-gfhzm
	68dc567ca7fe7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   31a34adc0b58d       ingress-nginx-admission-create-wqtmw
	85f9d7a9de0d5       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               5 minutes ago            Running             minikube-ingress-dns      0                   1a9ba0507ea0c       kube-ingress-dns-minikube
	027d4b1ca4553       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   4563deae99fd8       coredns-66bc5c9577-84mdw
	8f35358fdd20e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   e0e5e03d99d56       storage-provisioner
	3c898a5491661       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                             6 minutes ago            Running             kube-proxy                0                   7d54b5cda3a51       kube-proxy-cd2p6
	d982439c4568a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             6 minutes ago            Running             kindnet-cni               0                   751042a91dcd5       kindnet-gqwn9
	f77a8124262fd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             6 minutes ago            Running             etcd                      0                   e3e065ae2df6e       etcd-addons-157757
	8823a8b70499d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                             6 minutes ago            Running             kube-controller-manager   0                   b3a14ec0d250b       kube-controller-manager-addons-157757
	3883cc6b72321       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                             6 minutes ago            Running             kube-scheduler            0                   1da03f9148575       kube-scheduler-addons-157757
	c1b0250d164ee       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                             6 minutes ago            Running             kube-apiserver            0                   b7fcf870719ec       kube-apiserver-addons-157757
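	
	This table is the runtime's own inventory; during triage it can be regenerated on demand rather than read from the post-mortem, using the same ssh form as the failing test (a sketch):
	
	    out/minikube-linux-arm64 -p addons-157757 ssh "sudo crictl ps -a"
	    out/minikube-linux-arm64 -p addons-157757 ssh "sudo crictl pods"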
	
	
	==> coredns [027d4b1ca455389765748e823ac8a45f27575f5dfceabb0b07ee96a2a7caf7c1] <==
	[INFO] 10.244.0.15:36503 - 64880 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00283675s
	[INFO] 10.244.0.15:36503 - 37201 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000157835s
	[INFO] 10.244.0.15:36503 - 58026 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000128157s
	[INFO] 10.244.0.15:46389 - 50962 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000190196s
	[INFO] 10.244.0.15:46389 - 51225 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000120846s
	[INFO] 10.244.0.15:40839 - 22394 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000203652s
	[INFO] 10.244.0.15:40839 - 22205 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00023333s
	[INFO] 10.244.0.15:42127 - 39269 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105059s
	[INFO] 10.244.0.15:42127 - 39080 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090289s
	[INFO] 10.244.0.15:36136 - 43344 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001404803s
	[INFO] 10.244.0.15:36136 - 43131 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001488176s
	[INFO] 10.244.0.15:54702 - 43164 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141588s
	[INFO] 10.244.0.15:54702 - 42934 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160484s
	[INFO] 10.244.0.21:43016 - 37255 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000310295s
	[INFO] 10.244.0.21:50477 - 29964 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000419759s
	[INFO] 10.244.0.21:60735 - 29207 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000249855s
	[INFO] 10.244.0.21:54473 - 24511 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000251791s
	[INFO] 10.244.0.21:55936 - 36052 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172447s
	[INFO] 10.244.0.21:44083 - 34558 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147192s
	[INFO] 10.244.0.21:34805 - 46543 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001648843s
	[INFO] 10.244.0.21:48505 - 6120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002372488s
	[INFO] 10.244.0.21:53833 - 13096 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002045652s
	[INFO] 10.244.0.21:36540 - 48684 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002585584s
	[INFO] 10.244.0.23:32811 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166549s
	[INFO] 10.244.0.23:60057 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126261s
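	
	The NXDOMAIN/NOERROR pairs above are expected, not failures: with the default ndots:5, the pod resolver tries each entry of its search path (`<ns>.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, then the host suffix `us-east-2.compute.internal`) before the bare name, so each NOERROR answer is preceded by a run of NXDOMAINs. To see the search path driving this (pod name and image are placeholders):
	
	    kubectl --context addons-157757 run dnsprobe --image=busybox:1.36 --rm -it --restart=Never -- cat /etc/resolv.conf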
	
	
	==> describe nodes <==
	Name:               addons-157757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-157757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492
	                    minikube.k8s.io/name=addons-157757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_01T18_33_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-157757
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Oct 2025 18:33:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-157757
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Oct 2025 18:39:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Oct 2025 18:37:31 +0000   Wed, 01 Oct 2025 18:33:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Oct 2025 18:37:31 +0000   Wed, 01 Oct 2025 18:33:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Oct 2025 18:37:31 +0000   Wed, 01 Oct 2025 18:33:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Oct 2025 18:37:31 +0000   Wed, 01 Oct 2025 18:34:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-157757
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 70e0ce0446db4e4c8e24a48fa9e76ee3
	  System UUID:                b159fb71-fd72-4e31-97fc-b9fedb4e92f2
	  Boot ID:                    51f8feb8-87ca-412f-9e3b-3711f0b1f6a5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  default                     hello-world-app-5d498dc89-ctxzj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-g9vlb                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  headlamp                    headlamp-85f8f8dc54-p48kw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-f2kn7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m6s
	  kube-system                 coredns-66bc5c9577-84mdw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m9s
	  kube-system                 etcd-addons-157757                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m17s
	  kube-system                 kindnet-gqwn9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m9s
	  kube-system                 kube-apiserver-addons-157757                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-addons-157757       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-cd2p6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-addons-157757                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m5s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node addons-157757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node addons-157757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m24s (x8 over 6m24s)  kubelet          Node addons-157757 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m17s                  kubelet          Node addons-157757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m17s                  kubelet          Node addons-157757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m17s                  kubelet          Node addons-157757 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m12s                  node-controller  Node addons-157757 event: Registered Node addons-157757 in Controller
	  Normal   NodeReady                5m26s                  kubelet          Node addons-157757 status is now: NodeReady
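	
	For reference, this section is `kubectl describe node` output; individual figures can also be pulled directly when comparing runs, for example:
	
	    kubectl --context addons-157757 describe node addons-157757
	    kubectl --context addons-157757 get node addons-157757 -o jsonpath='{.status.allocatable}'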
	
	
	==> dmesg <==
	[Oct 1 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015655] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.519694] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034329] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761925] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.736328] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 1 17:30] hrtimer: interrupt took 17924701 ns
	[Oct 1 18:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [f77a8124262fdbe98f993293067a386147376225c84d5833aba3f822f93af369] <==
	{"level":"warn","ts":"2025-10-01T18:33:11.946390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:11.951537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:11.975806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:11.994852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:12.050101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:12.064062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:12.083274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:12.134530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:23.551815Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.728373ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040344479036533 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/edit\" mod_revision:356 > success:<request_put:<key:\"/registry/clusterroles/edit\" value_size:3634 >> failure:<request_range:<key:\"/registry/clusterroles/edit\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-01T18:33:23.551997Z","caller":"traceutil/trace.go:172","msg":"trace[845822899] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"120.870758ms","start":"2025-10-01T18:33:23.431114Z","end":"2025-10-01T18:33:23.551985Z","steps":["trace[845822899] 'process raft request'  (duration: 120.801803ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:33:23.552311Z","caller":"traceutil/trace.go:172","msg":"trace[1071913062] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"192.957749ms","start":"2025-10-01T18:33:23.359341Z","end":"2025-10-01T18:33:23.552299Z","steps":["trace[1071913062] 'process raft request'  (duration: 47.998488ms)","trace[1071913062] 'compare'  (duration: 139.645198ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-01T18:33:23.552426Z","caller":"traceutil/trace.go:172","msg":"trace[1876068276] linearizableReadLoop","detail":"{readStateIndex:370; appliedIndex:369; }","duration":"144.812422ms","start":"2025-10-01T18:33:23.407606Z","end":"2025-10-01T18:33:23.552419Z","steps":["trace[1876068276] 'read index received'  (duration: 23.645246ms)","trace[1876068276] 'applied index is now lower than readState.Index'  (duration: 121.166381ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T18:33:23.552653Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.036422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-01T18:33:23.552684Z","caller":"traceutil/trace.go:172","msg":"trace[710973554] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:363; }","duration":"145.074682ms","start":"2025-10-01T18:33:23.407602Z","end":"2025-10-01T18:33:23.552677Z","steps":["trace[710973554] 'agreement among raft nodes before linearized reading'  (duration: 144.97101ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-01T18:33:23.553201Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.949435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-01T18:33:23.632535Z","caller":"traceutil/trace.go:172","msg":"trace[1224377734] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:363; }","duration":"201.273204ms","start":"2025-10-01T18:33:23.431239Z","end":"2025-10-01T18:33:23.632513Z","steps":["trace[1224377734] 'agreement among raft nodes before linearized reading'  (duration: 121.926306ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:33:23.591749Z","caller":"traceutil/trace.go:172","msg":"trace[1950191618] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"104.402137ms","start":"2025-10-01T18:33:23.487331Z","end":"2025-10-01T18:33:23.591734Z","steps":["trace[1950191618] 'process raft request'  (duration: 104.317829ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:33:23.591792Z","caller":"traceutil/trace.go:172","msg":"trace[1990956624] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"104.601562ms","start":"2025-10-01T18:33:23.487186Z","end":"2025-10-01T18:33:23.591788Z","steps":["trace[1990956624] 'process raft request'  (duration: 66.083812ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-01T18:33:23.668736Z","caller":"traceutil/trace.go:172","msg":"trace[64100526] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"157.604189ms","start":"2025-10-01T18:33:23.511115Z","end":"2025-10-01T18:33:23.668719Z","steps":["trace[64100526] 'process raft request'  (duration: 144.554461ms)","trace[64100526] 'compare'  (duration: 12.59354ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-01T18:33:27.225384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:27.299952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:50.118091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:50.133399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:50.200223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:33:50.215799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:39:32 up  1:22,  0 users,  load average: 0.95, 1.47, 2.43
	Linux addons-157757 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d982439c4568aa251e191d6e4412c1107cb20d4771ecb9c82a6e6ccafbc28ca1] <==
	I1001 18:37:25.928449       1 main.go:301] handling current node
	I1001 18:37:35.921127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:37:35.921159       1 main.go:301] handling current node
	I1001 18:37:45.925169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:37:45.925204       1 main.go:301] handling current node
	I1001 18:37:55.925510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:37:55.925546       1 main.go:301] handling current node
	I1001 18:38:05.921063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:38:05.921102       1 main.go:301] handling current node
	I1001 18:38:15.928965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:38:15.928999       1 main.go:301] handling current node
	I1001 18:38:25.925315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:38:25.925423       1 main.go:301] handling current node
	I1001 18:38:35.924125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:38:35.924162       1 main.go:301] handling current node
	I1001 18:38:45.922062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:38:45.922214       1 main.go:301] handling current node
	I1001 18:38:55.925528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:38:55.925565       1 main.go:301] handling current node
	I1001 18:39:05.925330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:39:05.925443       1 main.go:301] handling current node
	I1001 18:39:15.928250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:39:15.928282       1 main.go:301] handling current node
	I1001 18:39:25.926880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:39:25.926990       1 main.go:301] handling current node
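	
	kindnet re-lists nodes roughly every ten seconds and programs routes for each node's pod CIDR; on this single-node cluster it only ever handles itself. The CIDR it reconciles is the PodCIDR shown in the node description above, retrievable directly:
	
	    kubectl --context addons-157757 get node addons-157757 -o jsonpath='{.spec.podCIDR}'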
	
	
	==> kube-apiserver [c1b0250d164ee4c00c2a502e107623335845a861cfc7d0e585ede06b82f1834f] <==
	I1001 18:34:50.592288       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1001 18:34:50.603831       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1001 18:35:39.416047       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38484: use of closed network connection
	E1001 18:35:39.680656       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38506: use of closed network connection
	E1001 18:35:39.838499       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38522: use of closed network connection
	I1001 18:36:16.213175       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1001 18:36:36.891683       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1001 18:36:38.889888       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 18:36:38.895812       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 18:36:38.930522       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 18:36:38.931632       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 18:36:38.947498       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 18:36:38.947658       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 18:36:38.953579       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 18:36:38.954674       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 18:36:39.077951       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 18:36:39.078674       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 18:36:39.949457       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 18:36:40.083428       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1001 18:36:40.089588       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1001 18:36:53.599417       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.199.205"}
	I1001 18:37:10.727400       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 18:37:11.227608       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.116.240"}
	I1001 18:37:51.529469       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1001 18:39:30.900432       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.234.196"}
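	
	The final line records the ClusterIP handed to the hello-world-app Service just before CRI-O starts its pod; a quick cross-check of that allocation against the live object:
	
	    kubectl --context addons-157757 get svc hello-world-app -o wide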
	
	
	==> kube-controller-manager [8823a8b70499d123da7fd7c2e2badd42c6658a72229edf5f42f0e74f4602c232] <==
	E1001 18:37:01.148998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:37:09.934342       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:37:09.936455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:37:13.685445       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:37:13.686581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:37:18.975502       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:37:18.976653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:37:47.547890       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:37:47.548953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:37:51.516816       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:37:51.517855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:37:57.657934       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:37:57.658971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:38:29.921333       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:38:29.922450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:38:35.247280       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:38:35.248408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:38:36.594962       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:38:36.595931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:39:15.920462       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:39:15.921678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:39:25.219312       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:39:25.220364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1001 18:39:25.832839       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1001 18:39:25.833896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
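	
	The repeating watch failures plausibly line up with the apiserver terminating the volumesnapshot watchers at 18:36:39 (see the kube-apiserver section): the metadata informer keeps re-listing an API group that has been removed. A hedged way to confirm the group is gone:
	
	    kubectl --context addons-157757 api-resources --api-group=snapshot.storage.k8s.io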
	
	
	==> kube-proxy [3c898a54916616b6d0f85d1f06b4113b8fc698d3d00d398c9f4cd4aaeafe5081] <==
	I1001 18:33:26.519733       1 server_linux.go:53] "Using iptables proxy"
	I1001 18:33:26.647625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:33:26.758771       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:33:26.758845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1001 18:33:26.758930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:33:27.354616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 18:33:27.354747       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:33:27.364981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:33:27.365360       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:33:27.365585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:33:27.378278       1 config.go:200] "Starting service config controller"
	I1001 18:33:27.378419       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:33:27.379329       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:33:27.380895       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:33:27.381003       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:33:27.381044       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:33:27.389092       1 config.go:309] "Starting node config controller"
	I1001 18:33:27.389180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:33:27.389213       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:33:27.479018       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:33:27.482702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1001 18:33:27.482710       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3883cc6b72321b3fca258b9cf255971abd9c8b7468fbe602bc5b8084201e02ae] <==
	I1001 18:33:14.013823       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:33:14.016219       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:33:14.016269       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:33:14.018028       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1001 18:33:14.018381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1001 18:33:14.018433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1001 18:33:14.021276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1001 18:33:14.027325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1001 18:33:14.027490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1001 18:33:14.027535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1001 18:33:14.027568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1001 18:33:14.027604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1001 18:33:14.027639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1001 18:33:14.027676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1001 18:33:14.027708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1001 18:33:14.032401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1001 18:33:14.032574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1001 18:33:14.032667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1001 18:33:14.032763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1001 18:33:14.034850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1001 18:33:14.035035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1001 18:33:14.035373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1001 18:33:14.035534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1001 18:33:14.035551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1001 18:33:15.016774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 01 18:38:56 addons-157757 kubelet[1542]: E1001 18:38:56.145076    1542 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759343936144485688 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:38:56 addons-157757 kubelet[1542]: E1001 18:38:56.145116    1542 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759343936144485688 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:38:58 addons-157757 kubelet[1542]: E1001 18:38:58.974066    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a2da395c5bf509749e91ceba884d498d3536ff6da28a9695b30f96a12bc860c3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a2da395c5bf509749e91ceba884d498d3536ff6da28a9695b30f96a12bc860c3/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:06 addons-157757 kubelet[1542]: E1001 18:39:06.147533    1542 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759343946147250409 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:39:06 addons-157757 kubelet[1542]: E1001 18:39:06.147574    1542 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759343946147250409 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:39:07 addons-157757 kubelet[1542]: E1001 18:39:07.331806    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/39e998fe2e4c623d2029424440fb976503c62633f956b416e4a9c539644b828a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/39e998fe2e4c623d2029424440fb976503c62633f956b416e4a9c539644b828a/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:07 addons-157757 kubelet[1542]: E1001 18:39:07.459809    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fd621322121640f3534dc0a1c005a758bacc26491e9a84c7395564fe03f3b72f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fd621322121640f3534dc0a1c005a758bacc26491e9a84c7395564fe03f3b72f/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:12 addons-157757 kubelet[1542]: E1001 18:39:12.210844    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/998cdeb26b057c1c1c9b735a4f8ac0c3253663aa6e02913bb82f4bd6726cd39b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/998cdeb26b057c1c1c9b735a4f8ac0c3253663aa6e02913bb82f4bd6726cd39b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:13 addons-157757 kubelet[1542]: I1001 18:39:13.741207    1542 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.852430    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a2da395c5bf509749e91ceba884d498d3536ff6da28a9695b30f96a12bc860c3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a2da395c5bf509749e91ceba884d498d3536ff6da28a9695b30f96a12bc860c3/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.856202    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/998cdeb26b057c1c1c9b735a4f8ac0c3253663aa6e02913bb82f4bd6726cd39b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/998cdeb26b057c1c1c9b735a4f8ac0c3253663aa6e02913bb82f4bd6726cd39b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.863253    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dbe5f7ae52abded8735d1f34faa6d1823489d4100dcf665c8c4456f9efecbefe/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dbe5f7ae52abded8735d1f34faa6d1823489d4100dcf665c8c4456f9efecbefe/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.868106    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/81a22af00c0f84d78e19cd4c684d60f442c8255ddff938e3a991c82f7d27db7f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/81a22af00c0f84d78e19cd4c684d60f442c8255ddff938e3a991c82f7d27db7f/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.868119    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dbe5f7ae52abded8735d1f34faa6d1823489d4100dcf665c8c4456f9efecbefe/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dbe5f7ae52abded8735d1f34faa6d1823489d4100dcf665c8c4456f9efecbefe/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.868300    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/81a22af00c0f84d78e19cd4c684d60f442c8255ddff938e3a991c82f7d27db7f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/81a22af00c0f84d78e19cd4c684d60f442c8255ddff938e3a991c82f7d27db7f/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.897807    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3f7ad4cc54da7b293c77943794e6bc9481e521ba6f9a74bf6598759723f6e70c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3f7ad4cc54da7b293c77943794e6bc9481e521ba6f9a74bf6598759723f6e70c/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.920717    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3f7ad4cc54da7b293c77943794e6bc9481e521ba6f9a74bf6598759723f6e70c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3f7ad4cc54da7b293c77943794e6bc9481e521ba6f9a74bf6598759723f6e70c/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:15 addons-157757 kubelet[1542]: E1001 18:39:15.926263    1542 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fd621322121640f3534dc0a1c005a758bacc26491e9a84c7395564fe03f3b72f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fd621322121640f3534dc0a1c005a758bacc26491e9a84c7395564fe03f3b72f/diff: no such file or directory, extraDiskErr: <nil>
	Oct 01 18:39:16 addons-157757 kubelet[1542]: E1001 18:39:16.150477    1542 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759343956150009214 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:39:16 addons-157757 kubelet[1542]: E1001 18:39:16.150518    1542 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759343956150009214 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:39:26 addons-157757 kubelet[1542]: E1001 18:39:26.153031    1542 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759343966152764132 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:39:26 addons-157757 kubelet[1542]: E1001 18:39:26.153072    1542 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759343966152764132 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:598614} inodes_used:{value:225}}"
	Oct 01 18:39:30 addons-157757 kubelet[1542]: I1001 18:39:30.782615    1542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddkc\" (UniqueName: \"kubernetes.io/projected/6d133829-fc78-4f3c-a82d-b6336753d327-kube-api-access-7ddkc\") pod \"hello-world-app-5d498dc89-ctxzj\" (UID: \"6d133829-fc78-4f3c-a82d-b6336753d327\") " pod="default/hello-world-app-5d498dc89-ctxzj"
	Oct 01 18:39:31 addons-157757 kubelet[1542]: W1001 18:39:31.080094    1542 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/44a9a4951ad6a86227aa1cd3c9bfca87327cdd08180e3cfe242f227318753879/crio-d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3 WatchSource:0}: Error finding container d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3: Status 404 returned error can't find the container with id d5cd9457104692dbbe4612f9434b5ff3371c4f91914ef5f932c204288c593db3
	Oct 01 18:39:32 addons-157757 kubelet[1542]: I1001 18:39:32.516142    1542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-ctxzj" podStartSLOduration=1.6347357649999998 podStartE2EDuration="2.516124843s" podCreationTimestamp="2025-10-01 18:39:30 +0000 UTC" firstStartedPulling="2025-10-01 18:39:31.083490947 +0000 UTC m=+375.486258883" lastFinishedPulling="2025-10-01 18:39:31.964880025 +0000 UTC m=+376.367647961" observedRunningTime="2025-10-01 18:39:32.515244834 +0000 UTC m=+376.918012795" watchObservedRunningTime="2025-10-01 18:39:32.516124843 +0000 UTC m=+376.918892795"
	
	
	==> storage-provisioner [8f35358fdd20e6cf253f9be1a1115f2f3859e34344c130a5619799cd6048d7f8] <==
	W1001 18:39:07.460576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:09.463761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:09.470138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:11.473892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:11.479498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:13.483031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:13.491941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:15.495034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:15.501569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:17.504931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:17.509499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:19.512501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:19.517804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:21.520915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:21.527785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:23.531075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:23.535723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:25.539234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:25.544059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:27.547375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:27.552353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:29.555233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:29.562070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:31.566007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:39:31.570241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
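The kube-proxy log in the dump above flags nodePortAddresses as unset, meaning NodePort services accept connections on every local IP. A minimal sketch of the change the log itself suggests, assuming a kubeadm-style kube-proxy ConfigMap (the "primary" keyword is accepted by recent kube-proxy releases; field name from KubeProxyConfiguration):

# Sketch: restrict NodePort listening to each node's primary IPs.
kubectl --context addons-157757 -n kube-system edit configmap kube-proxy
# then, inside the embedded KubeProxyConfiguration, set:
#   nodePortAddresses: ["primary"]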
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-157757 -n addons-157757
helpers_test.go:269: (dbg) Run:  kubectl --context addons-157757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-wqtmw ingress-nginx-admission-patch-gfhzm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-157757 describe pod ingress-nginx-admission-create-wqtmw ingress-nginx-admission-patch-gfhzm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-157757 describe pod ingress-nginx-admission-create-wqtmw ingress-nginx-admission-patch-gfhzm: exit status 1 (106.832886ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wqtmw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gfhzm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-157757 describe pod ingress-nginx-admission-create-wqtmw ingress-nginx-admission-patch-gfhzm: exit status 1
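The two non-running pods listed above are the ingress-nginx admission create/patch jobs, which run to completion and can be garbage-collected before the describe runs, hence the NotFound errors. A hypothetical check that distinguishes such completed jobs from genuinely failed pods:

# Completed (phase=Succeeded) pods are expected for admission jobs.
kubectl --context addons-157757 get pods -n ingress-nginx --field-selector=status.phase=Succeeded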
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable ingress-dns --alsologtostderr -v=1: (1.460088863s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable ingress --alsologtostderr -v=1: (7.79987762s)
--- FAIL: TestAddons/parallel/Ingress (153.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-246462 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-246462 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9dbdh" [6b6b185f-4bbc-48a2-938c-f55645ccd36b] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-9dbdh" [6b6b185f-4bbc-48a2-938c-f55645ccd36b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-246462 -n functional-246462
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-01 18:53:34.769924512 +0000 UTC m=+1348.969453061
functional_test.go:1645: (dbg) Run:  kubectl --context functional-246462 describe po hello-node-connect-7d85dfc575-9dbdh -n default
functional_test.go:1645: (dbg) kubectl --context functional-246462 describe po hello-node-connect-7d85dfc575-9dbdh -n default:
Name:             hello-node-connect-7d85dfc575-9dbdh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-246462/192.168.49.2
Start Time:       Wed, 01 Oct 2025 18:43:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jth7c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jth7c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9dbdh to functional-246462
Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m59s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m59s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m56s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m56s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
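The events above show the root cause: CRI-O refuses to expand the short image name kicbase/echo-server because the node's /etc/containers/registries.conf defines no unqualified-search registries. One possible workaround, sketched under the assumption that docker.io is the intended registry and that the key is not already present in that file:

# Sketch: define a search registry inside the minikube node, then restart CRI-O.
out/minikube-linux-arm64 -p functional-246462 ssh "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf"
out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl restart crio"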
functional_test.go:1645: (dbg) Run:  kubectl --context functional-246462 logs hello-node-connect-7d85dfc575-9dbdh -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-246462 logs hello-node-connect-7d85dfc575-9dbdh -n default: exit status 1 (102.457267ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9dbdh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-246462 logs hello-node-connect-7d85dfc575-9dbdh -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-246462 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-9dbdh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-246462/192.168.49.2
Start Time:       Wed, 01 Oct 2025 18:43:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jth7c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jth7c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9dbdh to functional-246462
Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-246462 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-246462 logs -l app=hello-node-connect: exit status 1 (89.223932ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9dbdh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-246462 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-246462 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.203.100
IPs:                      10.102.203.100
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31629/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
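Endpoints is empty above because the only pod matching the selector never becomes Ready, so the NodePort (31629) has nothing to forward to. A hypothetical alternative to editing registries.conf is to fully qualify the image so no short-name resolution is needed (registry and tag assumed here):

# Sketch: point the deployment at a fully qualified image reference.
kubectl --context functional-246462 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest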
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-246462
helpers_test.go:243: (dbg) docker inspect functional-246462:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42",
	        "Created": "2025-10-01T18:40:49.121431996Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308195,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-01T18:40:49.179870717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/hosts",
	        "LogPath": "/var/lib/docker/containers/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42-json.log",
	        "Name": "/functional-246462",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-246462:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-246462",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42",
	                "LowerDir": "/var/lib/docker/overlay2/f9ebce55d1277e51ca2dd12bd904e1c1ae9222618732989f432ced65cdeb2fda-init/diff:/var/lib/docker/overlay2/346fb2e4be8ca49e66f0777a766be9ef323e3747b8e386ae9882fb8153286814/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9ebce55d1277e51ca2dd12bd904e1c1ae9222618732989f432ced65cdeb2fda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9ebce55d1277e51ca2dd12bd904e1c1ae9222618732989f432ced65cdeb2fda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9ebce55d1277e51ca2dd12bd904e1c1ae9222618732989f432ced65cdeb2fda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-246462",
	                "Source": "/var/lib/docker/volumes/functional-246462/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-246462",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-246462",
	                "name.minikube.sigs.k8s.io": "functional-246462",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7d1dd2d82c26f90bd0943970e74776778c2e29eafa1f9bd3db1bbd2bdd8cf86",
	            "SandboxKey": "/var/run/docker/netns/a7d1dd2d82c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-246462": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ee:1e:2c:c5:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae16ee229acf519029f72317d2d4e16788f42db171d2e6f19741cf58d32d9585",
	                    "EndpointID": "9b479bb4ff4dc40c254aa8e22ebafa886bf355fe5e084a4e08127cef5f178411",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-246462",
	                        "e5772678338b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-246462 -n functional-246462
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 logs -n 25: (1.715299732s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-246462 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	│ kubectl │ functional-246462 kubectl -- --context functional-246462 get pods                                                          │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:42 UTC │
	│ start   │ -p functional-246462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:42 UTC │ 01 Oct 25 18:43 UTC │
	│ service │ invalid-svc -p functional-246462                                                                                           │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ cp      │ functional-246462 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ config  │ functional-246462 config unset cpus                                                                                        │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ config  │ functional-246462 config get cpus                                                                                          │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ config  │ functional-246462 config set cpus 2                                                                                        │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ config  │ functional-246462 config get cpus                                                                                          │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ config  │ functional-246462 config unset cpus                                                                                        │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ ssh     │ functional-246462 ssh -n functional-246462 sudo cat /home/docker/cp-test.txt                                               │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ config  │ functional-246462 config get cpus                                                                                          │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ ssh     │ functional-246462 ssh echo hello                                                                                           │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ cp      │ functional-246462 cp functional-246462:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2763953101/001/cp-test.txt │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ ssh     │ functional-246462 ssh cat /etc/hostname                                                                                    │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ tunnel  │ functional-246462 tunnel --alsologtostderr                                                                                 │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ tunnel  │ functional-246462 tunnel --alsologtostderr                                                                                 │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ ssh     │ functional-246462 ssh -n functional-246462 sudo cat /home/docker/cp-test.txt                                               │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ cp      │ functional-246462 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ tunnel  │ functional-246462 tunnel --alsologtostderr                                                                                 │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │                     │
	│ ssh     │ functional-246462 ssh -n functional-246462 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ addons  │ functional-246462 addons list                                                                                              │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	│ addons  │ functional-246462 addons list -o json                                                                                      │ functional-246462 │ jenkins │ v1.37.0 │ 01 Oct 25 18:43 UTC │ 01 Oct 25 18:43 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:42:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:42:39.476742  312953 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:42:39.476905  312953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:42:39.476910  312953 out.go:374] Setting ErrFile to fd 2...
	I1001 18:42:39.476913  312953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:42:39.477167  312953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 18:42:39.477519  312953 out.go:368] Setting JSON to false
	I1001 18:42:39.478432  312953 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5112,"bootTime":1759339048,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 18:42:39.478486  312953 start.go:140] virtualization:  
	I1001 18:42:39.482052  312953 out.go:179] * [functional-246462] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1001 18:42:39.486001  312953 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:42:39.486119  312953 notify.go:220] Checking for updates...
	I1001 18:42:39.491918  312953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:42:39.494924  312953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:42:39.497871  312953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 18:42:39.500842  312953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 18:42:39.503826  312953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:42:39.507147  312953 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:42:39.507242  312953 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:42:39.545087  312953 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 18:42:39.545188  312953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:42:39.612653  312953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-01 18:42:39.603233033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:42:39.612749  312953 docker.go:318] overlay module found
	I1001 18:42:39.615749  312953 out.go:179] * Using the docker driver based on existing profile
	I1001 18:42:39.618768  312953 start.go:304] selected driver: docker
	I1001 18:42:39.618775  312953 start.go:921] validating driver "docker" against &{Name:functional-246462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:42:39.618897  312953 start.go:932] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:42:39.619000  312953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:42:39.674905  312953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-01 18:42:39.665849612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:42:39.675317  312953 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:42:39.675340  312953 cni.go:84] Creating CNI manager for ""
	I1001 18:42:39.675397  312953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:42:39.675449  312953 start.go:348] cluster config:
	{Name:functional-246462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:42:39.678569  312953 out.go:179] * Starting "functional-246462" primary control-plane node in "functional-246462" cluster
	I1001 18:42:39.681439  312953 cache.go:123] Beginning downloading kic base image for docker with crio
	I1001 18:42:39.684421  312953 out.go:179] * Pulling base image v0.0.48 ...
	I1001 18:42:39.687245  312953 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:42:39.687310  312953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I1001 18:42:39.687342  312953 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1001 18:42:39.687356  312953 cache.go:58] Caching tarball of preloaded images
	I1001 18:42:39.687482  312953 preload.go:233] Found /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1001 18:42:39.687488  312953 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1001 18:42:39.687597  312953 profile.go:143] Saving config to /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/config.json ...
	I1001 18:42:39.705901  312953 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I1001 18:42:39.705912  312953 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I1001 18:42:39.705940  312953 cache.go:232] Successfully downloaded all kic artifacts
	I1001 18:42:39.705967  312953 start.go:360] acquireMachinesLock for functional-246462: {Name:mk2cd3d3aaf41292b7a8d5afd03ad96f0ad9cf15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 18:42:39.706034  312953 start.go:364] duration metric: took 46.343µs to acquireMachinesLock for "functional-246462"
	I1001 18:42:39.706054  312953 start.go:96] Skipping create...Using existing machine configuration
	I1001 18:42:39.706058  312953 fix.go:54] fixHost starting: 
	I1001 18:42:39.706350  312953 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
	I1001 18:42:39.723287  312953 fix.go:112] recreateIfNeeded on functional-246462: state=Running err=<nil>
	W1001 18:42:39.723305  312953 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 18:42:39.726868  312953 out.go:252] * Updating the running docker "functional-246462" container ...
	I1001 18:42:39.726894  312953 machine.go:93] provisionDockerMachine start ...
	I1001 18:42:39.726986  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:39.744647  312953 main.go:141] libmachine: Using SSH client type: native
	I1001 18:42:39.745101  312953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I1001 18:42:39.745109  312953 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 18:42:39.886278  312953 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-246462
	
	I1001 18:42:39.886293  312953 ubuntu.go:182] provisioning hostname "functional-246462"
	I1001 18:42:39.886356  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:39.906658  312953 main.go:141] libmachine: Using SSH client type: native
	I1001 18:42:39.907015  312953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I1001 18:42:39.907025  312953 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-246462 && echo "functional-246462" | sudo tee /etc/hostname
	I1001 18:42:40.074447  312953 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-246462
	
	I1001 18:42:40.074534  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:40.095532  312953 main.go:141] libmachine: Using SSH client type: native
	I1001 18:42:40.095859  312953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I1001 18:42:40.095874  312953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-246462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-246462/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-246462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 18:42:40.239149  312953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
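	The inlined script above is idempotent: it rewrites the 127.0.1.1 entry only when the profile hostname is missing from /etc/hosts. An illustrative check of the result inside the node (not part of the test run):
	
		# Count mappings for the profile name; expect 1
		grep -cE '^127\.0\.1\.1[[:space:]]+functional-246462' /etc/hosts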
	I1001 18:42:40.239166  312953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21631-288146/.minikube CaCertPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21631-288146/.minikube}
	I1001 18:42:40.239188  312953 ubuntu.go:190] setting up certificates
	I1001 18:42:40.239198  312953 provision.go:84] configureAuth start
	I1001 18:42:40.239274  312953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-246462
	I1001 18:42:40.256763  312953 provision.go:143] copyHostCerts
	I1001 18:42:40.256832  312953 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-288146/.minikube/ca.pem, removing ...
	I1001 18:42:40.256850  312953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-288146/.minikube/ca.pem
	I1001 18:42:40.256928  312953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21631-288146/.minikube/ca.pem (1082 bytes)
	I1001 18:42:40.257071  312953 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-288146/.minikube/cert.pem, removing ...
	I1001 18:42:40.257075  312953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-288146/.minikube/cert.pem
	I1001 18:42:40.257101  312953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21631-288146/.minikube/cert.pem (1123 bytes)
	I1001 18:42:40.257149  312953 exec_runner.go:144] found /home/jenkins/minikube-integration/21631-288146/.minikube/key.pem, removing ...
	I1001 18:42:40.257152  312953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21631-288146/.minikube/key.pem
	I1001 18:42:40.257174  312953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21631-288146/.minikube/key.pem (1675 bytes)
	I1001 18:42:40.257216  312953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca-key.pem org=jenkins.functional-246462 san=[127.0.0.1 192.168.49.2 functional-246462 localhost minikube]
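	The server cert minted above carries the five SANs listed in the san=[...] field. Whether a given name or IP is actually covered can be confirmed directly with openssl (an illustrative check; the path is copied from the log):
	
		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'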
	I1001 18:42:40.938594  312953 provision.go:177] copyRemoteCerts
	I1001 18:42:40.938646  312953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 18:42:40.938683  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:40.956717  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
	I1001 18:42:41.055856  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 18:42:41.081428  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1001 18:42:41.105643  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 18:42:41.131263  312953 provision.go:87] duration metric: took 892.052238ms to configureAuth
	I1001 18:42:41.131280  312953 ubuntu.go:206] setting minikube options for container-runtime
	I1001 18:42:41.131481  312953 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:42:41.131588  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:41.148686  312953 main.go:141] libmachine: Using SSH client type: native
	I1001 18:42:41.149000  312953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I1001 18:42:41.149012  312953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 18:42:46.568296  312953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 18:42:46.568310  312953 machine.go:96] duration metric: took 6.841409844s to provisionDockerMachine
	I1001 18:42:46.568320  312953 start.go:293] postStartSetup for "functional-246462" (driver="docker")
	I1001 18:42:46.568330  312953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 18:42:46.568401  312953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 18:42:46.568438  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:46.586092  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
	I1001 18:42:46.683682  312953 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 18:42:46.686688  312953 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 18:42:46.686711  312953 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 18:42:46.686720  312953 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 18:42:46.686725  312953 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 18:42:46.686734  312953 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-288146/.minikube/addons for local assets ...
	I1001 18:42:46.686809  312953 filesync.go:126] Scanning /home/jenkins/minikube-integration/21631-288146/.minikube/files for local assets ...
	I1001 18:42:46.686890  312953 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/ssl/certs/2900162.pem -> 2900162.pem in /etc/ssl/certs
	I1001 18:42:46.686961  312953 filesync.go:149] local asset: /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/test/nested/copy/290016/hosts -> hosts in /etc/test/nested/copy/290016
	I1001 18:42:46.687001  312953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/290016
	I1001 18:42:46.695396  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/ssl/certs/2900162.pem --> /etc/ssl/certs/2900162.pem (1708 bytes)
	I1001 18:42:46.720362  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/test/nested/copy/290016/hosts --> /etc/test/nested/copy/290016/hosts (40 bytes)
	I1001 18:42:46.743257  312953 start.go:296] duration metric: took 174.922704ms for postStartSetup
	I1001 18:42:46.743327  312953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:42:46.743380  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:46.760417  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
	I1001 18:42:46.856120  312953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 18:42:46.860835  312953 fix.go:56] duration metric: took 7.154768635s for fixHost
	I1001 18:42:46.860849  312953 start.go:83] releasing machines lock for "functional-246462", held for 7.154807987s
	I1001 18:42:46.860930  312953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-246462
	I1001 18:42:46.877416  312953 ssh_runner.go:195] Run: cat /version.json
	I1001 18:42:46.877458  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:46.877725  312953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 18:42:46.877768  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:42:46.896119  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
	I1001 18:42:46.906103  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
	I1001 18:42:46.990142  312953 ssh_runner.go:195] Run: systemctl --version
	I1001 18:42:47.122553  312953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 18:42:47.263640  312953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 18:42:47.268099  312953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:42:47.277175  312953 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1001 18:42:47.277265  312953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 18:42:47.286542  312953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 18:42:47.286556  312953 start.go:495] detecting cgroup driver to use...
	I1001 18:42:47.286589  312953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 18:42:47.286644  312953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 18:42:47.299569  312953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 18:42:47.311525  312953 docker.go:218] disabling cri-docker service (if available) ...
	I1001 18:42:47.311588  312953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 18:42:47.325092  312953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 18:42:47.336848  312953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 18:42:47.464232  312953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 18:42:47.587158  312953 docker.go:234] disabling docker service ...
	I1001 18:42:47.587256  312953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 18:42:47.600967  312953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 18:42:47.613145  312953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 18:42:47.733982  312953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 18:42:47.868664  312953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 18:42:47.882032  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 18:42:47.897664  312953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1001 18:42:47.897716  312953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.907508  312953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 18:42:47.907566  312953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.917345  312953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.926950  312953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.936392  312953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 18:42:47.945346  312953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.954978  312953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.964471  312953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 18:42:47.974082  312953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 18:42:47.982938  312953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 18:42:47.991524  312953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:42:48.122206  312953 ssh_runner.go:195] Run: sudo systemctl restart crio
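	Taken together, the 18:42:47 commands above apply a small set of CRI-O overrides and then restart the daemon. An equivalent standalone sketch, with paths and keys copied from the log (a reconstruction for readability, not minikube's own code):
	
		CONF=/etc/crio/crio.conf.d/02-crio.conf
		# Point crictl at the CRI-O socket
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
		# Pin the pause image and the cgroup driver
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
		# Recreate conmon_cgroup next to the cgroup_manager setting
		sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
		# Let unprivileged pods bind low ports via default_sysctls
		sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
		sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
		sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
		# Enable forwarding, then restart CRI-O
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
		sudo systemctl daemon-reload && sudo systemctl restart crio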
	I1001 18:42:48.315785  312953 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 18:42:48.315847  312953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 18:42:48.319687  312953 start.go:563] Will wait 60s for crictl version
	I1001 18:42:48.319740  312953 ssh_runner.go:195] Run: which crictl
	I1001 18:42:48.323257  312953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 18:42:48.373513  312953 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1001 18:42:48.373601  312953 ssh_runner.go:195] Run: crio --version
	I1001 18:42:48.412606  312953 ssh_runner.go:195] Run: crio --version
	I1001 18:42:48.454770  312953 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I1001 18:42:48.457728  312953 cli_runner.go:164] Run: docker network inspect functional-246462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 18:42:48.473524  312953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 18:42:48.480376  312953 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1001 18:42:48.483298  312953 kubeadm.go:875] updating cluster {Name:functional-246462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 18:42:48.483412  312953 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:42:48.483500  312953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:42:48.531568  312953 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:42:48.531578  312953 crio.go:433] Images already preloaded, skipping extraction
	I1001 18:42:48.531631  312953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 18:42:48.570919  312953 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 18:42:48.570932  312953 cache_images.go:85] Images are preloaded, skipping loading
	I1001 18:42:48.570938  312953 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1001 18:42:48.571054  312953 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-246462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
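	The generated unit above clears ExecStart= first so the minikube-specific kubelet invocation fully replaces the distro default. On a running node, the effective unit plus the 10-kubeadm.conf drop-in written a few lines below can be reviewed with (illustrative, not part of the run):
	
		systemctl cat kubelet                # unit file plus drop-ins
		systemctl show -p ExecStart kubelet  # the merged ExecStart actually in effect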
	I1001 18:42:48.571140  312953 ssh_runner.go:195] Run: crio config
	I1001 18:42:48.621984  312953 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1001 18:42:48.622006  312953 cni.go:84] Creating CNI manager for ""
	I1001 18:42:48.622014  312953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:42:48.622023  312953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 18:42:48.622044  312953 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-246462 NodeName:functional-246462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 18:42:48.622159  312953 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-246462"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
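	The kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by --- separators and shipped to /var/tmp/minikube/kubeadm.yaml.new. A sketch for splitting it back into one file per API object (awk here is an assumption; any YAML-aware splitter works):
	
		# Writes doc0.yaml .. doc3.yaml, one kubeadm API object each
		awk '/^---$/{n++; next} {print > ("doc" n ".yaml")}' /var/tmp/minikube/kubeadm.yaml.new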
	
	I1001 18:42:48.622240  312953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1001 18:42:48.631304  312953 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 18:42:48.631362  312953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 18:42:48.640069  312953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1001 18:42:48.659283  312953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 18:42:48.677756  312953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1001 18:42:48.695847  312953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 18:42:48.699498  312953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:42:48.818281  312953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:42:48.830730  312953 certs.go:68] Setting up /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462 for IP: 192.168.49.2
	I1001 18:42:48.830740  312953 certs.go:194] generating shared ca certs ...
	I1001 18:42:48.830755  312953 certs.go:226] acquiring lock for ca certs: {Name:mke2b4e9b838c885b8b094f221acc5151872bc25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:42:48.830912  312953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21631-288146/.minikube/ca.key
	I1001 18:42:48.830951  312953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.key
	I1001 18:42:48.830957  312953 certs.go:256] generating profile certs ...
	I1001 18:42:48.831037  312953 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.key
	I1001 18:42:48.831079  312953 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/apiserver.key.e1a04698
	I1001 18:42:48.831123  312953 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/proxy-client.key
	I1001 18:42:48.831235  312953 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/290016.pem (1338 bytes)
	W1001 18:42:48.831264  312953 certs.go:480] ignoring /home/jenkins/minikube-integration/21631-288146/.minikube/certs/290016_empty.pem, impossibly tiny 0 bytes
	I1001 18:42:48.831270  312953 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 18:42:48.831293  312953 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/ca.pem (1082 bytes)
	I1001 18:42:48.831313  312953 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/cert.pem (1123 bytes)
	I1001 18:42:48.831339  312953 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/certs/key.pem (1675 bytes)
	I1001 18:42:48.831377  312953 certs.go:484] found cert: /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/ssl/certs/2900162.pem (1708 bytes)
	I1001 18:42:48.831984  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 18:42:48.857901  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 18:42:48.881502  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 18:42:48.913857  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 18:42:48.952519  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 18:42:48.993281  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 18:42:49.026850  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 18:42:49.059195  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 18:42:49.096308  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 18:42:49.121149  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/certs/290016.pem --> /usr/share/ca-certificates/290016.pem (1338 bytes)
	I1001 18:42:49.146301  312953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/ssl/certs/2900162.pem --> /usr/share/ca-certificates/2900162.pem (1708 bytes)
	I1001 18:42:49.169735  312953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 18:42:49.188022  312953 ssh_runner.go:195] Run: openssl version
	I1001 18:42:49.194610  312953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 18:42:49.204799  312953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:42:49.208118  312953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:42:49.208179  312953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 18:42:49.214985  312953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 18:42:49.224112  312953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/290016.pem && ln -fs /usr/share/ca-certificates/290016.pem /etc/ssl/certs/290016.pem"
	I1001 18:42:49.233514  312953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/290016.pem
	I1001 18:42:49.237187  312953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 18:40 /usr/share/ca-certificates/290016.pem
	I1001 18:42:49.237241  312953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/290016.pem
	I1001 18:42:49.244459  312953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/290016.pem /etc/ssl/certs/51391683.0"
	I1001 18:42:49.253712  312953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2900162.pem && ln -fs /usr/share/ca-certificates/2900162.pem /etc/ssl/certs/2900162.pem"
	I1001 18:42:49.263157  312953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2900162.pem
	I1001 18:42:49.266768  312953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 18:40 /usr/share/ca-certificates/2900162.pem
	I1001 18:42:49.266903  312953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2900162.pem
	I1001 18:42:49.273916  312953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2900162.pem /etc/ssl/certs/3ec20f2e.0"
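	The openssl x509 -hash calls above produce the b5213941.0-style names: OpenSSL resolves CAs in /etc/ssl/certs through subject-hash symlinks. The pattern written out as a sketch (cert path copied from the log):
	
		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"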
	I1001 18:42:49.283068  312953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 18:42:49.286499  312953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 18:42:49.293289  312953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 18:42:49.300410  312953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 18:42:49.307200  312953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 18:42:49.313877  312953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 18:42:49.320757  312953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
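	Each -checkend 86400 above asks openssl whether the certificate remains valid for the next 24 hours (86,400 seconds); a nonzero exit flags it as expiring. The same sweep as a compact sketch, with the cert list copied from the log:
	
		for c in apiserver-kubelet-client.crt apiserver-etcd-client.crt etcd/server.crt \
		         etcd/healthcheck-client.crt etcd/peer.crt front-proxy-client.crt; do
		  openssl x509 -noout -in "/var/lib/minikube/certs/$c" -checkend 86400 \
		    || echo "expires within 24h: $c"
		done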
	I1001 18:42:49.327512  312953 kubeadm.go:392] StartCluster: {Name:functional-246462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:42:49.327597  312953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 18:42:49.327668  312953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:42:49.367205  312953 cri.go:89] found id: "9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989"
	I1001 18:42:49.367217  312953 cri.go:89] found id: "031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d"
	I1001 18:42:49.367220  312953 cri.go:89] found id: "676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7"
	I1001 18:42:49.367223  312953 cri.go:89] found id: "21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9"
	I1001 18:42:49.367225  312953 cri.go:89] found id: "e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c"
	I1001 18:42:49.367227  312953 cri.go:89] found id: "61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554"
	I1001 18:42:49.367229  312953 cri.go:89] found id: "aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1"
	I1001 18:42:49.367232  312953 cri.go:89] found id: "1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4"
	I1001 18:42:49.367234  312953 cri.go:89] found id: ""
	I1001 18:42:49.367286  312953 ssh_runner.go:195] Run: sudo runc list -f json
	I1001 18:42:49.390180  312953 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d/userdata","rootfs":"/var/lib/containers/storage/overlay/ada660ab15d12fb4ce2214870358f2c5259ec32802018a78fb381731e1985a57/merged","created":"2025-10-01T18:42:14.099330784Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c112505","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c112505\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.959522441Z","io.kubernetes.cri-o.Image":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri-o.ImageRef":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-246462\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"809e4c88359fa4f9e95aa6939874071c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-246462_809e4c88359fa4f9e95aa6939874071c/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ada660ab15d12fb4ce2214870358f2c5259ec32802018a78fb381731e1985a57/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-246462_kube-system_809e4c88359fa4f9e95aa6939874071c_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-246462_kube-system_809e4c88359fa4f9e95aa6939874071c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/809e4c88359fa4f9e95aa6939874071c/containers/kube-controller-manager/00b5e25f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/809e4c88359fa4f9e95aa6939874071c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-246462","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"809e4c88359fa4f9e95aa6939874071c","kubernetes.io/config.hash":"809e4c88359fa4f9e95aa6939874071c","kubernetes.io/config.seen":"2025-10-01T18:41:04.603363577Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4/userdata","rootfs":"/var/lib/containers/storage/overlay/cf9b6f90938b494f389bfb8d3e15ab9cc5943d35cabd7419fb9aee41edb9daca/merged","created":"2025-10-01T18:42:14.115776108Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"96651ac1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"96651ac1\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.808105709Z","io.kubernetes.cri-o.Image":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri-o.ImageRef":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-4h9qw\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"762650e3-d5fa-4b26-a594-4e8bcfc47fdb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-4h9qw_762650e3-d5fa-4b26-a594-4e8bcfc47fdb/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cf9b6f90938b494f389bfb8d3e15ab9cc5943d35cabd7419fb9aee41edb9daca/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-4h9qw_kube-system_762650e3-d5fa-4b26-a594-4e8bcfc47fdb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8cb8db5c04d3791bdfb62129946b785b51b9ca4d52f5c58f53026f56477ae711/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8cb8db5c04d3791bdfb62129946b785b51b9ca4d52f5c58f53026f56477ae711","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-4h9qw_kube-system_762650e3-d5fa-4b26-a594-4e8bcfc47fdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/762650e3-d5fa-4b26-a594-4e8bcfc47fdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/762650e3-d5fa-4b26-a594-4e8bcfc47fdb/containers/kube-proxy/073c8436\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/762650e3-d5fa-4b26-a594-4e8bcfc47fdb/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/762650e3-d5fa-4b26-a594-4e8bcfc47fdb/volumes/kubernetes.io~projected/kube-api-access-465nn\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-p
roxy-4h9qw","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"762650e3-d5fa-4b26-a594-4e8bcfc47fdb","kubernetes.io/config.seen":"2025-10-01T18:41:17.366022371Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9/userdata","rootfs":"/var/lib/containers/storage/overlay/430fa39158ecabbcc1aad5042f09aa47d4b3f3c47100b9533f834033551a9527/merged","created":"2025-10-01T18:42:14.117564725Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.A
nnotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.91324619Z","io.kubernetes.cri-o.Image":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-9vmxg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"de80e7b4-16f6-4634-bb7a-d410f3f01bdc\"}"
,"io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-9vmxg_de80e7b4-16f6-4634-bb7a-d410f3f01bdc/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/430fa39158ecabbcc1aad5042f09aa47d4b3f3c47100b9533f834033551a9527/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-9vmxg_kube-system_de80e7b4-16f6-4634-bb7a-d410f3f01bdc_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/605dc2d08819a0dc3b51bde2e73eca3fcb391c4d56d26df150c4ad2f84dc4085/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"605dc2d08819a0dc3b51bde2e73eca3fcb391c4d56d26df150c4ad2f84dc4085","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-9vmxg_kube-system_de80e7b4-16f6-4634-bb7a-d410f3f01bdc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_pa
th\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/de80e7b4-16f6-4634-bb7a-d410f3f01bdc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/de80e7b4-16f6-4634-bb7a-d410f3f01bdc/containers/kindnet-cni/b957b7e6\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/de80e7b4-16f6-4634-bb7a-d410f3f01bdc/volumes/kubernetes.io~projected/kube-api-access-dxg48\",\"readonly\":true,\"propa
gation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-9vmxg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"de80e7b4-16f6-4634-bb7a-d410f3f01bdc","kubernetes.io/config.seen":"2025-10-01T18:41:17.338262649Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554/userdata","rootfs":"/var/lib/containers/storage/overlay/518459b34fd2e5802d32f767692346aa9ef1b04b55eb731a13b2eddc5c36c04a/merged","created":"2025-10-01T18:42:14.097840312Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d0cc63c7","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]"
,"io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d0cc63c7\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.855764762Z","io.kubernetes.cri-o.Image":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.
1","io.kubernetes.cri-o.ImageRef":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-246462\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9729f66a0a98a57f663dbe1ace4cf317\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-246462_9729f66a0a98a57f663dbe1ace4cf317/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/518459b34fd2e5802d32f767692346aa9ef1b04b55eb731a13b2eddc5c36c04a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-246462_kube-system_9729f66a0a98a57f663dbe1ace4cf317_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e9871d02e8397cba234a793b5521c6d2ca3321a88301e4d2ebed84d4e5af2b99/userdata/resolv.conf","io.ku
bernetes.cri-o.SandboxID":"e9871d02e8397cba234a793b5521c6d2ca3321a88301e4d2ebed84d4e5af2b99","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-246462_kube-system_9729f66a0a98a57f663dbe1ace4cf317_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9729f66a0a98a57f663dbe1ace4cf317/containers/kube-apiserver/0a2c6fd0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9729f66a0a98a57f663dbe1ace4cf317/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/c
a-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-246462","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9729f66a0a98a57f663dbe1ace4cf317","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"9729f66a0a98a57f663dbe1ace4cf317","kubernetes.io/config.seen":"2025-10-01T18:41:04.603361600Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVer
sion":"1.0.2-dev","id":"676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7/userdata","rootfs":"/var/lib/containers/storage/overlay/9f6bb263cf5c9e842f6638bf2d8180ae620bf813122f91801a147dee8549f18b/merged","created":"2025-10-01T18:42:14.118294318Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"af42bbeb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"af42bbeb\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"con
tainerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.946962203Z","io.kubernetes.cri-o.Image":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri-o.ImageRef":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-246462\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"83a962854b0a5dbbd9127c4be60fd0
61\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-246462_83a962854b0a5dbbd9127c4be60fd061/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9f6bb263cf5c9e842f6638bf2d8180ae620bf813122f91801a147dee8549f18b/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-246462_kube-system_83a962854b0a5dbbd9127c4be60fd061_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-246462_kube-system_83a962854b0a5dbbd9127c4be60fd061_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.
TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/83a962854b0a5dbbd9127c4be60fd061/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/83a962854b0a5dbbd9127c4be60fd061/containers/kube-scheduler/92d56992\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-246462","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"83a962854b0a5dbbd9127c4be60fd061","kubernetes.io/config.hash":"83a962854b0a5dbbd9127c4be60fd061","kubernetes.io/config.seen":"2025-10-01T18:41:04.603365046Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-d
ev","id":"9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989/userdata","rootfs":"/var/lib/containers/storage/overlay/a1ec4c59aaf1064f4579c53efc69113d3bcc0f5f03a9c6b0278c327b53fd82fe/merged","created":"2025-10-01T18:42:14.097889338Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubern
etes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Crea
ted":"2025-10-01T18:42:13.992729524Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-m4t97\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a2294a5e-be99-41a0-bdb1-0bf29937e2c8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-m4t97_a2294a5e-be99-41a0-bdb1-0bf29937e2c8/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a1ec4c59aaf1064f4579c53efc69113d3bcc0f5f03a9c6b0278c327b53fd82fe/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-m4t97_kube-system_a2294a
5e-be99-41a0-bdb1-0bf29937e2c8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-m4t97_kube-system_a2294a5e-be99-41a0-bdb1-0bf29937e2c8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/a2294a5e-be99-41a0-bdb1-0bf29937e2c8/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a2294a5e-be99-41a0-bdb1-0bf29937e2c8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_pat
h\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a2294a5e-be99-41a0-bdb1-0bf29937e2c8/containers/coredns/3c209343\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a2294a5e-be99-41a0-bdb1-0bf29937e2c8/volumes/kubernetes.io~projected/kube-api-access-gmqrm\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-m4t97","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a2294a5e-be99-41a0-bdb1-0bf29937e2c8","kubernetes.io/config.seen":"2025-10-01T18:42:00.509257166Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d
4b1/userdata","rootfs":"/var/lib/containers/storage/overlay/6f4ab3605d307bc09706e2e4d66ea08104a75b0a358cc1b24d8314720a5e7692/merged","created":"2025-10-01T18:42:14.116017514Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.k
ubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.826007839Z","io.kubernetes.cri-o.Image":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-246462\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8ad2a5f007d06c8fe948983b6b56aac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-246462_a8ad2a5f007d06c8fe948983b6b56aac/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6f4ab3
605d307bc09706e2e4d66ea08104a75b0a358cc1b24d8314720a5e7692/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-246462_kube-system_a8ad2a5f007d06c8fe948983b6b56aac_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9b58fb5089f8ed5f2ffffd08b437aad86115b75d78ec2763324b5beea1f55b8d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9b58fb5089f8ed5f2ffffd08b437aad86115b75d78ec2763324b5beea1f55b8d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-246462_kube-system_a8ad2a5f007d06c8fe948983b6b56aac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8ad2a5f007d06c8fe948983b6b56aac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8ad2a5f007d06c8fe948983b
6b56aac/containers/etcd/f89bf743\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-246462","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8ad2a5f007d06c8fe948983b6b56aac","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"a8ad2a5f007d06c8fe948983b6b56aac","kubernetes.io/config.seen":"2025-10-01T18:41:04.603355634Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c","pid":0,"status":"stopped","bundle":"/run/container
s/storage/overlay-containers/e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c/userdata","rootfs":"/var/lib/containers/storage/overlay/bb7296d8cf48f16742539927a1d9e8fe26402e59b7aa1b82a5fbb81271c77e83/merged","created":"2025-10-01T18:42:14.152473205Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c","io.kubernetes.cri-o.C
ontainerType":"container","io.kubernetes.cri-o.Created":"2025-10-01T18:42:13.861091458Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"86765e31-1cfa-45da-88b5-221ee9f60924\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_86765e31-1cfa-45da-88b5-221ee9f60924/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bb7296d8cf48f16742539927a1d9e8fe26402e59b7aa1b82a5fbb81271c77e83/merged","io.kubernetes.cri-o.Name":"k8s_storage-pro
visioner_storage-provisioner_kube-system_86765e31-1cfa-45da-88b5-221ee9f60924_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/597f903c9b9296d34676507e53cd95fc58b2543f0dab76d3e8b18fc612d5d68c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"597f903c9b9296d34676507e53cd95fc58b2543f0dab76d3e8b18fc612d5d68c","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_86765e31-1cfa-45da-88b5-221ee9f60924_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/86765e31-1cfa-45da-88b5-221ee9f60924/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pod
s/86765e31-1cfa-45da-88b5-221ee9f60924/containers/storage-provisioner/92656f04\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/86765e31-1cfa-45da-88b5-221ee9f60924/volumes/kubernetes.io~projected/kube-api-access-mr2tl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"86765e31-1cfa-45da-88b5-221ee9f60924","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisio
ner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-10-01T18:42:00.507793140Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1001 18:42:49.390770  312953 cri.go:126] list returned 8 containers
	I1001 18:42:49.390778  312953 cri.go:129] container: {ID:031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d Status:stopped}
	I1001 18:42:49.390811  312953 cri.go:135] skipping {031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390820  312953 cri.go:129] container: {ID:1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4 Status:stopped}
	I1001 18:42:49.390825  312953 cri.go:135] skipping {1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4 stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390829  312953 cri.go:129] container: {ID:21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9 Status:stopped}
	I1001 18:42:49.390837  312953 cri.go:135] skipping {21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9 stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390842  312953 cri.go:129] container: {ID:61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554 Status:stopped}
	I1001 18:42:49.390847  312953 cri.go:135] skipping {61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554 stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390850  312953 cri.go:129] container: {ID:676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7 Status:stopped}
	I1001 18:42:49.390855  312953 cri.go:135] skipping {676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7 stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390861  312953 cri.go:129] container: {ID:9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989 Status:stopped}
	I1001 18:42:49.390865  312953 cri.go:135] skipping {9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989 stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390869  312953 cri.go:129] container: {ID:aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1 Status:stopped}
	I1001 18:42:49.390873  312953 cri.go:135] skipping {aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1 stopped}: state = "stopped", want "paused"
	I1001 18:42:49.390878  312953 cri.go:129] container: {ID:e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c Status:stopped}
	I1001 18:42:49.390882  312953 cri.go:135] skipping {e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c stopped}: state = "stopped", want "paused"
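The unpause scan above walks all eight kube-system containers and skips each one because its runtime state is "stopped" while the caller only acts on "paused" containers. A minimal Go sketch of that filter, using a hypothetical Container struct in place of minikube's internal cri descriptor:

package main

import "fmt"

// Container is a hypothetical stand-in for minikube's internal
// cri container descriptor (ID plus runtime state).
type Container struct {
	ID    string
	State string
}

// filterByState keeps only containers whose state matches want,
// mirroring the `skipping {... stopped}: state = "stopped", want "paused"`
// lines in the log above.
func filterByState(all []Container, want string) []Container {
	var kept []Container
	for _, c := range all {
		if c.State != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.State, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	containers := []Container{
		{ID: "031471c29fe4", State: "stopped"},
		{ID: "1bc4db2f2454", State: "stopped"},
	}
	fmt.Println("kept:", filterByState(containers, "paused"))
}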
	I1001 18:42:49.390943  312953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 18:42:49.400084  312953 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 18:42:49.400103  312953 kubeadm.go:589] restartPrimaryControlPlane start ...
	I1001 18:42:49.400157  312953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 18:42:49.408609  312953 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:42:49.409140  312953 kubeconfig.go:125] found "functional-246462" server: "https://192.168.49.2:8441"
	I1001 18:42:49.410417  312953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 18:42:49.419595  312953 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-01 18:40:56.721229634 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-01 18:42:48.690584635 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
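The drift detection above boils down to a unified diff between the kubeadm config written at the last start and the freshly rendered one; diff's non-zero exit, plus the hunk showing enable-admission-plugins changing to NamespaceAutoProvision, is what triggers the reconfigure path. A rough equivalent via os/exec (file paths taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff -u exits 0 when the files match and 1 when they differ;
	// a non-zero exit plus the diff text is treated as config drift.
	out, err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil {
		fmt.Printf("detected kubeadm config drift, will reconfigure:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}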
	I1001 18:42:49.419604  312953 kubeadm.go:1152] stopping kube-system containers ...
	I1001 18:42:49.419615  312953 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 18:42:49.419670  312953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 18:42:49.456538  312953 cri.go:89] found id: "9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989"
	I1001 18:42:49.456549  312953 cri.go:89] found id: "031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d"
	I1001 18:42:49.456553  312953 cri.go:89] found id: "676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7"
	I1001 18:42:49.456555  312953 cri.go:89] found id: "21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9"
	I1001 18:42:49.456557  312953 cri.go:89] found id: "e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c"
	I1001 18:42:49.456570  312953 cri.go:89] found id: "61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554"
	I1001 18:42:49.456572  312953 cri.go:89] found id: "aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1"
	I1001 18:42:49.456575  312953 cri.go:89] found id: "1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4"
	I1001 18:42:49.456577  312953 cri.go:89] found id: ""
	I1001 18:42:49.456582  312953 cri.go:252] Stopping containers: [9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989 031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d 676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7 21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9 e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c 61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554 aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1 1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4]
	I1001 18:42:49.456640  312953 ssh_runner.go:195] Run: which crictl
	I1001 18:42:49.460299  312953 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989 031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d 676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7 21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9 e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c 61343dfb5481d51c31dd1240ee0c501be7cab6a31f11f0f0a933d07ed06f3554 aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1 1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4
	I1001 18:42:49.535178  312953 ssh_runner.go:195] Run: sudo systemctl stop kubelet
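Tearing the control plane down is the two-step sequence visible above: crictl lists every container carrying the kube-system namespace label, the whole batch is stopped with a 10-second grace period, and only then is the kubelet unit stopped. A hedged shell-out sketch of the crictl half, reusing the exact commands from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all (-a) kube-system container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// Stop the whole batch with the same 10s grace period the log shows.
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}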
	I1001 18:42:49.649837  312953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 18:42:49.658848  312953 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  1 18:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  1 18:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  1 18:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  1 18:41 /etc/kubernetes/scheduler.conf
	
	I1001 18:42:49.658904  312953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1001 18:42:49.668507  312953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1001 18:42:49.677327  312953 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:42:49.677380  312953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 18:42:49.686038  312953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1001 18:42:49.694874  312953 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:42:49.694944  312953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 18:42:49.703841  312953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1001 18:42:49.712590  312953 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 18:42:49.712643  312953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
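The grep/rm loop above enforces a simple rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8441 is deleted so the upcoming kubeconfig phase regenerates it (admin.conf passed the check here, so only the other three were removed). A compact sketch that reads the files directly instead of shelling out to grep:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it so
			// "kubeadm init phase kubeconfig" rewrites it.
			fmt.Println("removing", conf)
			os.Remove(conf)
		}
	}
}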
	I1001 18:42:49.721058  312953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 18:42:49.729726  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:42:49.775644  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:42:53.921075  312953 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.145405553s)
	I1001 18:42:53.921096  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:42:54.117712  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:42:54.196795  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
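Instead of a full `kubeadm init`, the restart replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same /var/tmp/minikube/kubeadm.yaml. Sketched as a loop (the kubeadm path matches the PATH prefix in the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	const kubeadm = "/var/lib/minikube/binaries/v1.34.1/kubeadm"
	// Phase order matters: certs and kubeconfigs must exist before the
	// kubelet starts, and the static-pod manifests come last.
	for _, phase := range []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	} {
		args := append([]string{kubeadm, "init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("phase %q failed: %v\n%s", phase, err, out))
		}
	}
	fmt.Println("control plane static pods regenerated")
}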
	I1001 18:42:54.266578  312953 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:42:54.266650  312953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:42:54.767323  312953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:42:55.267737  312953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:42:55.300644  312953 api_server.go:72] duration metric: took 1.034066043s to wait for apiserver process to appear ...
	I1001 18:42:55.300658  312953 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:42:55.300683  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:42:58.505102  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:42:58.505118  312953 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:42:58.505130  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:42:58.623461  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 18:42:58.623477  312953 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 18:42:58.801625  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:42:58.815385  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:58.815424  312953 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[... warning body omitted: identical to the 500 healthz output above ...]
	I1001 18:42:59.300821  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:42:59.312274  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:59.312289  312953 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[... warning body omitted: identical to the 500 healthz output above ...]
	I1001 18:42:59.801071  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:42:59.815009  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 18:42:59.815026  312953 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[... warning body omitted: identical to the 500 healthz output above ...]
	I1001 18:43:00.302778  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:43:00.321082  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1001 18:43:00.339555  312953 api_server.go:141] control plane version: v1.34.1
	I1001 18:43:00.339575  312953 api_server.go:131] duration metric: took 5.038910634s to wait for apiserver health ...
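The readiness gate above polls /healthz roughly every half second and tolerates the expected progression: 403 while anonymous access to the path is still forbidden, 500 while the rbac/bootstrap-roles and scheduling post-start hooks settle, then 200 with a bare "ok" body. A minimal poller under the same assumptions (self-signed serving cert, hence the skipped TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert for 192.168.49.2,
		// so certificate verification is skipped for this probe only.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}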
	I1001 18:43:00.339584  312953 cni.go:84] Creating CNI manager for ""
	I1001 18:43:00.339590  312953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:43:00.347648  312953 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1001 18:43:00.352420  312953 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 18:43:00.357878  312953 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1001 18:43:00.357891  312953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 18:43:00.406243  312953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 18:43:00.952266  312953 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:43:00.955934  312953 system_pods.go:59] 8 kube-system pods found
	I1001 18:43:00.955958  312953 system_pods.go:61] "coredns-66bc5c9577-m4t97" [a2294a5e-be99-41a0-bdb1-0bf29937e2c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:43:00.955966  312953 system_pods.go:61] "etcd-functional-246462" [bcbb969a-538e-447b-9588-ef604b6ee2e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:43:00.955971  312953 system_pods.go:61] "kindnet-9vmxg" [de80e7b4-16f6-4634-bb7a-d410f3f01bdc] Running
	I1001 18:43:00.955977  312953 system_pods.go:61] "kube-apiserver-functional-246462" [3efc8f78-ab64-4e00-919a-fb263946a816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:43:00.955984  312953 system_pods.go:61] "kube-controller-manager-functional-246462" [62107f6b-8b69-4e72-9a66-c1c8ed24eeaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:43:00.955988  312953 system_pods.go:61] "kube-proxy-4h9qw" [762650e3-d5fa-4b26-a594-4e8bcfc47fdb] Running
	I1001 18:43:00.956001  312953 system_pods.go:61] "kube-scheduler-functional-246462" [38dc5cfd-43b9-4884-ae8f-b70f11dcb268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:43:00.956005  312953 system_pods.go:61] "storage-provisioner" [86765e31-1cfa-45da-88b5-221ee9f60924] Running
	I1001 18:43:00.956011  312953 system_pods.go:74] duration metric: took 3.733841ms to wait for pod list to return data ...
	I1001 18:43:00.956018  312953 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:43:00.958559  312953 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 18:43:00.958578  312953 node_conditions.go:123] node cpu capacity is 2
	I1001 18:43:00.958587  312953 node_conditions.go:105] duration metric: took 2.566056ms to run NodePressure ...
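Both waits above (the kube-system pod listing and the NodePressure capacity check) are plain client-go reads. A sketch of the same two reads; the kubeconfig path is an assumption for illustration:

    // nodecheck.go: list kube-system pods and read node capacity with client-go.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()

        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            // Matches the logged checks: ephemeral storage 203034800Ki, cpu 2.
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
                n.Name, n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
        }
    }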
	I1001 18:43:00.958603  312953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 18:43:01.232738  312953 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I1001 18:43:01.236317  312953 kubeadm.go:735] kubelet initialised
	I1001 18:43:01.236328  312953 kubeadm.go:736] duration metric: took 3.576178ms waiting for restarted kubelet to initialise ...
	I1001 18:43:01.236342  312953 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 18:43:01.243828  312953 ops.go:34] apiserver oom_adj: -16
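The oom_adj probe confirms the kubelet has shielded the restarted apiserver from the OOM killer (-16). The same check as a small Go program:

    // oomcheck.go: read the kube-apiserver's oom_adj from /proc.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatalf("pgrep: %v", err) // pgrep exits non-zero when no process matches
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the log
    }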
	I1001 18:43:01.243839  312953 kubeadm.go:593] duration metric: took 11.843731524s to restartPrimaryControlPlane
	I1001 18:43:01.243848  312953 kubeadm.go:394] duration metric: took 11.916346058s to StartCluster
	I1001 18:43:01.243873  312953 settings.go:142] acquiring lock: {Name:mkd3d3b21fb3f2e0bfee200edb8bfa6f57a6455f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:43:01.243954  312953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:43:01.244580  312953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21631-288146/kubeconfig: {Name:mkf64803b00ff38d43d452cf5741b7023d24d24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 18:43:01.244820  312953 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 18:43:01.245070  312953 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:43:01.245107  312953 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 18:43:01.245169  312953 addons.go:69] Setting storage-provisioner=true in profile "functional-246462"
	I1001 18:43:01.245182  312953 addons.go:238] Setting addon storage-provisioner=true in "functional-246462"
	W1001 18:43:01.245186  312953 addons.go:247] addon storage-provisioner should already be in state true
	I1001 18:43:01.245206  312953 host.go:66] Checking if "functional-246462" exists ...
	I1001 18:43:01.245626  312953 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
	I1001 18:43:01.246232  312953 addons.go:69] Setting default-storageclass=true in profile "functional-246462"
	I1001 18:43:01.246248  312953 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-246462"
	I1001 18:43:01.246599  312953 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
	I1001 18:43:01.248257  312953 out.go:179] * Verifying Kubernetes components...
	I1001 18:43:01.251329  312953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 18:43:01.279787  312953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 18:43:01.282750  312953 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:43:01.282762  312953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 18:43:01.282900  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:43:01.292938  312953 addons.go:238] Setting addon default-storageclass=true in "functional-246462"
	W1001 18:43:01.292948  312953 addons.go:247] addon default-storageclass should already be in state true
	I1001 18:43:01.292972  312953 host.go:66] Checking if "functional-246462" exists ...
	I1001 18:43:01.293379  312953 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
	I1001 18:43:01.315503  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
	I1001 18:43:01.337766  312953 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 18:43:01.337779  312953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 18:43:01.337836  312953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
	I1001 18:43:01.367978  312953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
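Both ssh clients above get their port (33151) from the docker inspect template shown in the log: the container's 22/tcp binding published on the host. A sketch of that port discovery:

    // sshport.go: find the host port mapped to a container's 22/tcp.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template the log shows, minus the surrounding quotes.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", format, "functional-246462").Output()
        if err != nil {
            log.Fatalf("docker inspect: %v", err)
        }
        port := strings.TrimSpace(string(out))
        fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port) // 33151 in the log
    }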
	I1001 18:43:01.453877  312953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 18:43:01.470582  312953 node_ready.go:35] waiting up to 6m0s for node "functional-246462" to be "Ready" ...
	I1001 18:43:01.474885  312953 node_ready.go:49] node "functional-246462" is "Ready"
	I1001 18:43:01.474901  312953 node_ready.go:38] duration metric: took 4.288318ms for node "functional-246462" to be "Ready" ...
	I1001 18:43:01.474913  312953 api_server.go:52] waiting for apiserver process to appear ...
	I1001 18:43:01.474970  312953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:43:01.478586  312953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 18:43:01.494849  312953 api_server.go:72] duration metric: took 250.002145ms to wait for apiserver process to appear ...
	I1001 18:43:01.494864  312953 api_server.go:88] waiting for apiserver healthz status ...
	I1001 18:43:01.494883  312953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1001 18:43:01.505389  312953 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1001 18:43:01.506645  312953 api_server.go:141] control plane version: v1.34.1
	I1001 18:43:01.506658  312953 api_server.go:131] duration metric: took 11.789799ms to wait for apiserver health ...
	I1001 18:43:01.506665  312953 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 18:43:01.510512  312953 system_pods.go:59] 8 kube-system pods found
	I1001 18:43:01.510531  312953 system_pods.go:61] "coredns-66bc5c9577-m4t97" [a2294a5e-be99-41a0-bdb1-0bf29937e2c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:43:01.510538  312953 system_pods.go:61] "etcd-functional-246462" [bcbb969a-538e-447b-9588-ef604b6ee2e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:43:01.510543  312953 system_pods.go:61] "kindnet-9vmxg" [de80e7b4-16f6-4634-bb7a-d410f3f01bdc] Running
	I1001 18:43:01.510554  312953 system_pods.go:61] "kube-apiserver-functional-246462" [3efc8f78-ab64-4e00-919a-fb263946a816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:43:01.510560  312953 system_pods.go:61] "kube-controller-manager-functional-246462" [62107f6b-8b69-4e72-9a66-c1c8ed24eeaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:43:01.510564  312953 system_pods.go:61] "kube-proxy-4h9qw" [762650e3-d5fa-4b26-a594-4e8bcfc47fdb] Running
	I1001 18:43:01.510569  312953 system_pods.go:61] "kube-scheduler-functional-246462" [38dc5cfd-43b9-4884-ae8f-b70f11dcb268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:43:01.510574  312953 system_pods.go:61] "storage-provisioner" [86765e31-1cfa-45da-88b5-221ee9f60924] Running
	I1001 18:43:01.510579  312953 system_pods.go:74] duration metric: took 3.909024ms to wait for pod list to return data ...
	I1001 18:43:01.510585  312953 default_sa.go:34] waiting for default service account to be created ...
	I1001 18:43:01.513646  312953 default_sa.go:45] found service account: "default"
	I1001 18:43:01.513659  312953 default_sa.go:55] duration metric: took 3.069972ms for default service account to be created ...
	I1001 18:43:01.513667  312953 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 18:43:01.516660  312953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 18:43:01.520893  312953 system_pods.go:86] 8 kube-system pods found
	I1001 18:43:01.520910  312953 system_pods.go:89] "coredns-66bc5c9577-m4t97" [a2294a5e-be99-41a0-bdb1-0bf29937e2c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 18:43:01.520919  312953 system_pods.go:89] "etcd-functional-246462" [bcbb969a-538e-447b-9588-ef604b6ee2e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 18:43:01.520924  312953 system_pods.go:89] "kindnet-9vmxg" [de80e7b4-16f6-4634-bb7a-d410f3f01bdc] Running
	I1001 18:43:01.520929  312953 system_pods.go:89] "kube-apiserver-functional-246462" [3efc8f78-ab64-4e00-919a-fb263946a816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 18:43:01.520935  312953 system_pods.go:89] "kube-controller-manager-functional-246462" [62107f6b-8b69-4e72-9a66-c1c8ed24eeaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 18:43:01.520939  312953 system_pods.go:89] "kube-proxy-4h9qw" [762650e3-d5fa-4b26-a594-4e8bcfc47fdb] Running
	I1001 18:43:01.520944  312953 system_pods.go:89] "kube-scheduler-functional-246462" [38dc5cfd-43b9-4884-ae8f-b70f11dcb268] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 18:43:01.520947  312953 system_pods.go:89] "storage-provisioner" [86765e31-1cfa-45da-88b5-221ee9f60924] Running
	I1001 18:43:01.520953  312953 system_pods.go:126] duration metric: took 7.281253ms to wait for k8s-apps to be running ...
	I1001 18:43:01.520960  312953 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 18:43:01.521016  312953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:43:02.305848  312953 system_svc.go:56] duration metric: took 784.869345ms WaitForService to wait for kubelet
	I1001 18:43:02.305889  312953 kubeadm.go:578] duration metric: took 1.061036263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 18:43:02.305937  312953 node_conditions.go:102] verifying NodePressure condition ...
	I1001 18:43:02.310131  312953 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 18:43:02.310146  312953 node_conditions.go:123] node cpu capacity is 2
	I1001 18:43:02.310158  312953 node_conditions.go:105] duration metric: took 4.206521ms to run NodePressure ...
	I1001 18:43:02.310169  312953 start.go:241] waiting for startup goroutines ...
	I1001 18:43:02.318444  312953 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1001 18:43:02.321354  312953 addons.go:514] duration metric: took 1.07623085s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 18:43:02.321406  312953 start.go:246] waiting for cluster config update ...
	I1001 18:43:02.321418  312953 start.go:255] writing updated cluster config ...
	I1001 18:43:02.321752  312953 ssh_runner.go:195] Run: rm -f paused
	I1001 18:43:02.326004  312953 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1001 18:43:02.332097  312953 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m4t97" in "kube-system" namespace to be "Ready" or be gone ...
	W1001 18:43:04.338589  312953 pod_ready.go:104] pod "coredns-66bc5c9577-m4t97" is not "Ready", error: <nil>
	W1001 18:43:06.837436  312953 pod_ready.go:104] pod "coredns-66bc5c9577-m4t97" is not "Ready", error: <nil>
	I1001 18:43:07.337888  312953 pod_ready.go:94] pod "coredns-66bc5c9577-m4t97" is "Ready"
	I1001 18:43:07.337903  312953 pod_ready.go:86] duration metric: took 5.005791662s for pod "coredns-66bc5c9577-m4t97" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:07.340732  312953 pod_ready.go:83] waiting for pod "etcd-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:07.345163  312953 pod_ready.go:94] pod "etcd-functional-246462" is "Ready"
	I1001 18:43:07.345176  312953 pod_ready.go:86] duration metric: took 4.432198ms for pod "etcd-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:07.347376  312953 pod_ready.go:83] waiting for pod "kube-apiserver-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	W1001 18:43:09.353243  312953 pod_ready.go:104] pod "kube-apiserver-functional-246462" is not "Ready", error: <nil>
	W1001 18:43:11.353470  312953 pod_ready.go:104] pod "kube-apiserver-functional-246462" is not "Ready", error: <nil>
	I1001 18:43:11.853436  312953 pod_ready.go:94] pod "kube-apiserver-functional-246462" is "Ready"
	I1001 18:43:11.853450  312953 pod_ready.go:86] duration metric: took 4.506056462s for pod "kube-apiserver-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:11.856781  312953 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:11.863093  312953 pod_ready.go:94] pod "kube-controller-manager-functional-246462" is "Ready"
	I1001 18:43:11.863108  312953 pod_ready.go:86] duration metric: took 6.31086ms for pod "kube-controller-manager-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:11.865554  312953 pod_ready.go:83] waiting for pod "kube-proxy-4h9qw" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:11.870172  312953 pod_ready.go:94] pod "kube-proxy-4h9qw" is "Ready"
	I1001 18:43:11.870197  312953 pod_ready.go:86] duration metric: took 4.620828ms for pod "kube-proxy-4h9qw" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:11.936272  312953 pod_ready.go:83] waiting for pod "kube-scheduler-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:13.941316  312953 pod_ready.go:94] pod "kube-scheduler-functional-246462" is "Ready"
	I1001 18:43:13.941331  312953 pod_ready.go:86] duration metric: took 2.005046545s for pod "kube-scheduler-functional-246462" in "kube-system" namespace to be "Ready" or be gone ...
	I1001 18:43:13.941341  312953 pod_ready.go:40] duration metric: took 11.615313421s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
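Each of the pod waits above is a "Ready or be gone" loop on a single pod. A client-go sketch of that loop for the coredns pod named in the log; the kubeconfig path is an assumption:

    // podready.go: wait until a pod reports Ready or is deleted.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // the log's "extra waiting up to 4m0s"
        defer cancel()
        for {
            p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-m4t97", metav1.GetOptions{})
            switch {
            case apierrors.IsNotFound(err): // "or be gone"
                fmt.Println("pod is gone")
                return
            case err != nil:
                log.Fatal(err) // includes context deadline exceeded
            case isReady(p):
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }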
	I1001 18:43:13.999555  312953 start.go:620] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1001 18:43:14.002611  312953 out.go:179] * Done! kubectl is now configured to use "functional-246462" cluster and "default" namespace by default
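The closing version line flags a minor skew of 1, which is within kubectl's supported window of one minor version either side of the cluster. The arithmetic behind that line, spelled out:

    // skew.go: compute the kubectl/cluster minor-version skew.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.33.2", "1.34.1" // versions from the log
        skew := minor(cluster) - minor(kubectl)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
        if skew > 1 {
            fmt.Println("warning: skew exceeds the supported +/-1 minor version")
        }
    }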
	
	
	==> CRI-O <==
	Oct 01 18:43:50 functional-246462 crio[4141]: time="2025-10-01 18:43:50.302152184Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-r8rnr Namespace:default ID:42ad64c784f68c84c9aafb28171ddc5afde6973cff017ba1608e3daebb80c049 UID:69fc6257-3746-42b7-9e61-d0c33627ab4b NetNS:/var/run/netns/d23fa44d-3ae9-4b28-8342-d1ccc9ecf69e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 01 18:43:50 functional-246462 crio[4141]: time="2025-10-01 18:43:50.302301158Z" level=info msg="Checking pod default_hello-node-75c85bcc94-r8rnr for CNI network kindnet (type=ptp)"
	Oct 01 18:43:50 functional-246462 crio[4141]: time="2025-10-01 18:43:50.304890133Z" level=info msg="Ran pod sandbox 42ad64c784f68c84c9aafb28171ddc5afde6973cff017ba1608e3daebb80c049 with infra container: default/hello-node-75c85bcc94-r8rnr/POD" id=42147878-ec89-4b23-bc6e-371eab563364 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 01 18:43:50 functional-246462 crio[4141]: time="2025-10-01 18:43:50.307211420Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1a545bf9-4890-4cdf-92ec-433dc1c06b0f name=/runtime.v1.ImageService/PullImage
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.289908141Z" level=info msg="Stopping pod sandbox: 131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80" id=f98f5eea-42cf-42ed-9dd3-0e3eb58b6a20 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.289959843Z" level=info msg="Stopped pod sandbox (already stopped): 131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80" id=f98f5eea-42cf-42ed-9dd3-0e3eb58b6a20 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.290759952Z" level=info msg="Removing pod sandbox: 131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80" id=01439126-ceb4-43f3-993f-04ad9acb3263 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.298480854Z" level=info msg="Removed pod sandbox: 131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80" id=01439126-ceb4-43f3-993f-04ad9acb3263 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.299026601Z" level=info msg="Stopping pod sandbox: 4a555a7f4bc167a3ae2b79f37fc5848032cee9da7de4bb5dee36f029985b7528" id=5ecb206c-5c34-4a6c-b9df-e00f060870ec name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.299058733Z" level=info msg="Stopped pod sandbox (already stopped): 4a555a7f4bc167a3ae2b79f37fc5848032cee9da7de4bb5dee36f029985b7528" id=5ecb206c-5c34-4a6c-b9df-e00f060870ec name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.302949256Z" level=info msg="Removing pod sandbox: 4a555a7f4bc167a3ae2b79f37fc5848032cee9da7de4bb5dee36f029985b7528" id=abb84903-a972-4733-af33-f1d815a81ffd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.315657533Z" level=info msg="Removed pod sandbox: 4a555a7f4bc167a3ae2b79f37fc5848032cee9da7de4bb5dee36f029985b7528" id=abb84903-a972-4733-af33-f1d815a81ffd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.316154410Z" level=info msg="Stopping pod sandbox: e9871d02e8397cba234a793b5521c6d2ca3321a88301e4d2ebed84d4e5af2b99" id=9e974cd3-8c5d-4cb0-9e20-91bb34a223f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.316190685Z" level=info msg="Stopped pod sandbox (already stopped): e9871d02e8397cba234a793b5521c6d2ca3321a88301e4d2ebed84d4e5af2b99" id=9e974cd3-8c5d-4cb0-9e20-91bb34a223f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.316548466Z" level=info msg="Removing pod sandbox: e9871d02e8397cba234a793b5521c6d2ca3321a88301e4d2ebed84d4e5af2b99" id=671d5342-043f-4d5f-bfb4-260401ad8220 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:43:54 functional-246462 crio[4141]: time="2025-10-01 18:43:54.325147427Z" level=info msg="Removed pod sandbox: e9871d02e8397cba234a793b5521c6d2ca3321a88301e4d2ebed84d4e5af2b99" id=671d5342-043f-4d5f-bfb4-260401ad8220 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 01 18:44:01 functional-246462 crio[4141]: time="2025-10-01 18:44:01.284334436Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c0108989-ca5c-49d1-a3d8-c6194904ace9 name=/runtime.v1.ImageService/PullImage
	Oct 01 18:44:12 functional-246462 crio[4141]: time="2025-10-01 18:44:12.283901255Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f3513e53-edd7-4859-b605-77152ba88290 name=/runtime.v1.ImageService/PullImage
	Oct 01 18:44:26 functional-246462 crio[4141]: time="2025-10-01 18:44:26.283496822Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f4fa85b5-7577-43b2-bea2-3b0cf9192430 name=/runtime.v1.ImageService/PullImage
	Oct 01 18:45:03 functional-246462 crio[4141]: time="2025-10-01 18:45:03.284125283Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ea28ffd-79c4-44ef-8c96-d6e1c835c020 name=/runtime.v1.ImageService/PullImage
	Oct 01 18:45:18 functional-246462 crio[4141]: time="2025-10-01 18:45:18.285195179Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dc31c3a5-8644-4c1c-8bf6-5e8f655cabfe name=/runtime.v1.ImageService/PullImage
	Oct 01 18:46:35 functional-246462 crio[4141]: time="2025-10-01 18:46:35.284171802Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d7167d49-86d7-48f4-9340-c95921261b6c name=/runtime.v1.ImageService/PullImage
	Oct 01 18:46:49 functional-246462 crio[4141]: time="2025-10-01 18:46:49.283778032Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=905fc4b3-5ffd-48ab-befc-b064d8cb1fe8 name=/runtime.v1.ImageService/PullImage
	Oct 01 18:49:16 functional-246462 crio[4141]: time="2025-10-01 18:49:16.283832122Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a0acbfc3-6d56-4bf2-b114-38054e49f126 name=/runtime.v1.ImageService/PullImage
	Oct 01 18:49:41 functional-246462 crio[4141]: time="2025-10-01 18:49:41.284158623Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=382c6837-4556-4a59-a331-5f2f8003c2a8 name=/runtime.v1.ImageService/PullImage
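Note how the gaps between the repeated "Pulling image: kicbase/echo-server:latest" lines stretch from seconds to minutes. That widening schedule is consistent with the kubelet's image-pull backoff (two workloads pull the same image here, so the intervals interleave), and a pull that never completes is what leaves the hello-node pods pending. A toy reproduction of a capped doubling schedule; the base and cap are chosen for illustration, not read from kubelet configuration:

    // backoff.go: a capped exponential retry schedule, as used for image pulls.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for i := 1; i <= 6; i++ {
            fmt.Printf("attempt %d: pulling kicbase/echo-server:latest (next retry in %s)\n", i, delay)
            delay *= 2 // double after every failure...
            if delay > maxDelay {
                delay = maxDelay // ...but never wait longer than the cap
            }
        }
    }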
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b72e314f15c3       docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc   9 minutes ago       Running             myfrontend                0                   47f067804a17f       sp-pod
	ca9aa606ca55d       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   10 minutes ago      Running             nginx                     0                   b37e87285ff7d       nginx-svc
	10a7f1e757c88       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   ce2b2d58d1124       coredns-66bc5c9577-m4t97
	d544fd72d90e6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   605dc2d08819a       kindnet-9vmxg
	17eea92035c83       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   597f903c9b929       storage-provisioner
	1c8b3441b5021       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   8cb8db5c04d37       kube-proxy-4h9qw
	bc71ffbe37320       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   9a1fd86028c45       kube-apiserver-functional-246462
	b3bb2dfd7006c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   29be358a2b036       kube-controller-manager-functional-246462
	120f164b3f7a3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   9b58fb5089f8e       etcd-functional-246462
	883ef5c3454d8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   875115c4733b7       kube-scheduler-functional-246462
	9938a35735963       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   ce2b2d58d1124       coredns-66bc5c9577-m4t97
	031471c29fe49       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   29be358a2b036       kube-controller-manager-functional-246462
	676b0aa650d37       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   875115c4733b7       kube-scheduler-functional-246462
	21fc8b63b185c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   605dc2d08819a       kindnet-9vmxg
	e90141aa62b9b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   597f903c9b929       storage-provisioner
	aa72452d845c0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   9b58fb5089f8e       etcd-functional-246462
	1bc4db2f24543       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   8cb8db5c04d37       kube-proxy-4h9qw
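The table above is CRI-level container state: the ATTEMPT column is the per-container restart count, and the Exited rows are the pre-restart instances of the same pods (note the matching POD IDs). Collecting the same view programmatically is a single crictl call, sketched here assuming crictl is installed on the node and pointed at the CRI-O socket:

    // crictlps.go: dump all containers (running and exited) via crictl.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            log.Fatalf("crictl ps: %v\n%s", err, out)
        }
        fmt.Print(string(out))
    }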
	
	
	==> coredns [10a7f1e757c888363898f799f9689ff82bc5701bfb4ba82cc16f0a1d27465a4c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52680 - 55054 "HINFO IN 4152967681726799845.8623451924069912885. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048027253s
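The HINFO query for a long random name against 127.0.0.1 is CoreDNS's loop-detection self-probe; NXDOMAIN is the expected answer, and what matters is that the server answered at all. A sketch of the same probe using github.com/miekg/dns, the library CoreDNS itself is built on:

    // hinfoprobe.go: send a loop-detection-style HINFO query to a local resolver.
    package main

    import (
        "fmt"
        "log"

        "github.com/miekg/dns"
    )

    func main() {
        m := new(dns.Msg)
        // Random-looking name copied from the log line above.
        m.SetQuestion(dns.Fqdn("4152967681726799845.8623451924069912885."), dns.TypeHINFO)
        c := new(dns.Client)
        r, rtt, err := c.Exchange(m, "127.0.0.1:53")
        if err != nil {
            log.Fatal(err)
        }
        // NXDOMAIN for a random name means the resolver is up and not looping.
        fmt.Printf("rcode=%s rtt=%s\n", dns.RcodeToString[r.Rcode], rtt)
    }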
	
	
	==> coredns [9938a357359632e9f70e081946151aae6d28a9ff1558228d0b39683674508989] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50035 - 33741 "HINFO IN 7769235140052978731.4629145515213696371. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011339006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-246462
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-246462
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de12e0f54d226aca16c1f78311795f5ec99c1492
	                    minikube.k8s.io/name=functional-246462
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_01T18_41_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Oct 2025 18:41:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-246462
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Oct 2025 18:53:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Oct 2025 18:52:31 +0000   Wed, 01 Oct 2025 18:41:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Oct 2025 18:52:31 +0000   Wed, 01 Oct 2025 18:41:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Oct 2025 18:52:31 +0000   Wed, 01 Oct 2025 18:41:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Oct 2025 18:52:31 +0000   Wed, 01 Oct 2025 18:42:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-246462
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c05be85832944ed9e9df02ad0914543
	  System UUID:                7539dab7-9200-4759-a994-47b44122bbd3
	  Boot ID:                    51f8feb8-87ca-412f-9e3b-3711f0b1f6a5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-r8rnr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-9dbdh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-m4t97                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-246462                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9vmxg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-246462             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-246462    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4h9qw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-246462             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-246462 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m (x9 over 12m)  kubelet          Node functional-246462 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-246462 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-246462 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-246462 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-246462 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-246462 event: Registered Node functional-246462 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-246462 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-246462 event: Registered Node functional-246462 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-246462 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-246462 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-246462 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-246462 event: Registered Node functional-246462 in Controller
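The "Allocated resources" block is derived by summing requests across the non-terminated pod table and dividing by allocatable capacity. A quick check of the two headline percentages:

    // alloc.go: recompute the node's request percentages from the pod table.
    package main

    import "fmt"

    func main() {
        // CPU requests: coredns 100m + etcd 100m + kindnet 100m +
        // apiserver 250m + controller-manager 200m + scheduler 100m.
        requestsMilliCPU := 100 + 100 + 100 + 250 + 200 + 100
        allocatableMilliCPU := 2 * 1000 // 2 CPUs
        fmt.Printf("cpu requests: %dm (%d%%)\n",
            requestsMilliCPU, requestsMilliCPU*100/allocatableMilliCPU) // 850m (42%)

        // Memory requests: coredns 70Mi + etcd 100Mi + kindnet 50Mi.
        requestsMemMi := 70 + 100 + 50
        allocatableMemMi := 8022296 / 1024 // node reports 8022296Ki
        fmt.Printf("memory requests: %dMi (%d%%)\n",
            requestsMemMi, requestsMemMi*100/allocatableMemMi) // 220Mi (2%)
    }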
	
	
	==> dmesg <==
	[Oct 1 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015655] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.519694] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034329] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761925] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.736328] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 1 17:30] hrtimer: interrupt took 17924701 ns
	[Oct 1 18:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [120f164b3f7a351955fbe42d3b67b3def1bf312b54f109289f5776ecadb9cbc0] <==
	{"level":"warn","ts":"2025-10-01T18:42:57.222621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.241504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.262816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.278233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.293010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.310265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.349744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.368023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.379453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.404595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.421618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.436293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.458877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.474901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.515723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.538593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.556585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.574125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.606732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.630806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.677321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:57.706846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56808","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-01T18:52:55.981837Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1132}
	{"level":"info","ts":"2025-10-01T18:52:56.007149Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1132,"took":"24.774575ms","hash":3551581028,"current-db-size-bytes":3158016,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1388544,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-01T18:52:56.007203Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3551581028,"revision":1132,"compact-revision":-1}
	
	
	==> etcd [aa72452d845c09263cb61af321fd78c71e23ea084c71550e0b7f1bc9fcb3d4b1] <==
	{"level":"warn","ts":"2025-10-01T18:42:17.622321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:17.640537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:17.659673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:17.685714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:17.706453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:17.719955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-01T18:42:17.774714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39470","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-01T18:42:41.312124Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-01T18:42:41.312178Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-246462","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-01T18:42:41.312275Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-01T18:42:41.594251Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-01T18:42:41.595754Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-01T18:42:41.595813Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-01T18:42:41.595870Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-01T18:42:41.595881Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:42:41.595857Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-01T18:42:41.595944Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-01T18:42:41.595959Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-01T18:42:41.595966Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:42:41.596024Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-01T18:42:41.596036Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-01T18:42:41.599896Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-01T18:42:41.599984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-01T18:42:41.600039Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-01T18:42:41.600071Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-246462","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 18:53:36 up  1:36,  0 users,  load average: 0.14, 0.32, 1.20
	Linux functional-246462 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [21fc8b63b185c753f5cf8a2adec583344b019c1faa7255d30c6a95fa54ac38f9] <==
	I1001 18:42:14.387278       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1001 18:42:14.395797       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1001 18:42:14.395937       1 main.go:148] setting mtu 1500 for CNI 
	I1001 18:42:14.395955       1 main.go:178] kindnetd IP family: "ipv4"
	I1001 18:42:14.395975       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-01T18:42:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1001 18:42:14.529030       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1001 18:42:14.529103       1 controller.go:381] "Waiting for informer caches to sync"
	I1001 18:42:14.529137       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1001 18:42:14.597165       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1001 18:42:19.194874       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1001 18:42:19.194996       1 metrics.go:72] Registering metrics
	I1001 18:42:19.195098       1 controller.go:711] "Syncing nftables rules"
	I1001 18:42:24.530399       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:42:24.530467       1 main.go:301] handling current node
	I1001 18:42:34.531395       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:42:34.531450       1 main.go:301] handling current node
	
	
	==> kindnet [d544fd72d90e62f84fd9b1b47c09c30243ff9e338508edec4e7219b95e6ef5a3] <==
	I1001 18:51:30.096174       1 main.go:301] handling current node
	I1001 18:51:40.096004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:51:40.096150       1 main.go:301] handling current node
	I1001 18:51:50.095682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:51:50.095720       1 main.go:301] handling current node
	I1001 18:52:00.104754       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:52:00.104798       1 main.go:301] handling current node
	I1001 18:52:10.096060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:52:10.096101       1 main.go:301] handling current node
	I1001 18:52:20.095945       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:52:20.095984       1 main.go:301] handling current node
	I1001 18:52:30.095946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:52:30.095988       1 main.go:301] handling current node
	I1001 18:52:40.095852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:52:40.095890       1 main.go:301] handling current node
	I1001 18:52:50.095449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:52:50.095486       1 main.go:301] handling current node
	I1001 18:53:00.202727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:53:00.202832       1 main.go:301] handling current node
	I1001 18:53:10.095269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:53:10.095344       1 main.go:301] handling current node
	I1001 18:53:20.095211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:53:20.095249       1 main.go:301] handling current node
	I1001 18:53:30.095417       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1001 18:53:30.095455       1 main.go:301] handling current node
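Both kindnet logs show the same steady-state behavior: every ten seconds the daemon enumerates node IPs and reconciles networking for each, short-circuiting on the node it is running on ("handling current node"). A toy version of that reconcile loop:

    // nodeloop.go: a ten-second node reconcile loop in the kindnet style.
    package main

    import (
        "log"
        "time"
    )

    func handleNode(ips map[string]struct{}) {
        log.Printf("Handling node with IPs: %v", ips)
        // For the node running the loop, kindnet maintains local CNI and
        // nftables state instead of installing routes to a peer.
    }

    func main() {
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for range ticker.C { // runs until the process is stopped, like a daemon
            handleNode(map[string]struct{}{"192.168.49.2": {}})
        }
    }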
	
	
	==> kube-apiserver [bc71ffbe3732074ea695ba86b3074d0aa65630de61cbfba01a6b1903f7d963bb] <==
	I1001 18:42:58.688643       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1001 18:42:58.694393       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1001 18:42:58.695239       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 18:42:58.702224       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1001 18:42:58.702771       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1001 18:42:58.747060       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1001 18:42:58.747096       1 policy_source.go:240] refreshing policies
	E1001 18:42:58.754138       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1001 18:42:58.767143       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 18:42:59.315706       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1001 18:42:59.402673       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 18:43:00.944976       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1001 18:43:01.093526       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1001 18:43:01.170861       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 18:43:01.179820       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 18:43:02.225799       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 18:43:02.374696       1 controller.go:667] quota admission added evaluator for: endpoints
	I1001 18:43:02.425731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1001 18:43:17.889194       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.238.187"}
	I1001 18:43:24.715287       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.154.55"}
	I1001 18:43:34.429677       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.203.100"}
	E1001 18:43:41.596001       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40138: use of closed network connection
	E1001 18:43:49.834639       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47310: use of closed network connection
	I1001 18:43:50.045689       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.72.197"}
	I1001 18:52:58.642861       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [031471c29fe491ccc0a000195c586756b08cd6ca26c1ff8ee3a10fdf0d13799d] <==
	I1001 18:42:22.009965       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1001 18:42:22.014836       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1001 18:42:22.022070       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1001 18:42:22.022204       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1001 18:42:22.022290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1001 18:42:22.023553       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1001 18:42:22.025389       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1001 18:42:22.026246       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1001 18:42:22.027498       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1001 18:42:22.029326       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1001 18:42:22.029829       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1001 18:42:22.031050       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1001 18:42:22.035325       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1001 18:42:22.037590       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1001 18:42:22.039895       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1001 18:42:22.054168       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1001 18:42:22.054180       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1001 18:42:22.054266       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1001 18:42:22.054274       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 18:42:22.054325       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1001 18:42:22.054518       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1001 18:42:22.055484       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1001 18:42:22.056543       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1001 18:42:22.066745       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1001 18:42:22.066813       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [b3bb2dfd7006c673955fe6dab6e6a4b86ef4ff479dfb7dd8f0b124842994c193] <==
	I1001 18:43:02.023707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1001 18:43:02.023729       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1001 18:43:02.025258       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1001 18:43:02.025365       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1001 18:43:02.025379       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1001 18:43:02.033034       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1001 18:43:02.039490       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1001 18:43:02.046871       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1001 18:43:02.054364       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1001 18:43:02.054469       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1001 18:43:02.055719       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1001 18:43:02.059094       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1001 18:43:02.062441       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1001 18:43:02.064739       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1001 18:43:02.068779       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1001 18:43:02.068912       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1001 18:43:02.068948       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1001 18:43:02.068979       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 18:43:02.069068       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1001 18:43:02.069116       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1001 18:43:02.069761       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1001 18:43:02.070937       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1001 18:43:02.073305       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1001 18:43:02.073400       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1001 18:43:02.087096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-proxy [1bc4db2f245434176f5bfbf081177c66f2fec87e033dbc00e447e91f55369da4] <==
	I1001 18:42:18.311110       1 server_linux.go:53] "Using iptables proxy"
	I1001 18:42:19.261178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:42:19.373967       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:42:19.374084       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1001 18:42:19.374239       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:42:19.396700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 18:42:19.396819       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:42:19.401202       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:42:19.401542       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:42:19.401763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:42:19.403083       1 config.go:200] "Starting service config controller"
	I1001 18:42:19.403143       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:42:19.403209       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:42:19.403243       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:42:19.403280       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:42:19.403308       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:42:19.403983       1 config.go:309] "Starting node config controller"
	I1001 18:42:19.404061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:42:19.404094       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:42:19.510884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:42:19.510972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1001 18:42:19.511105       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [1c8b3441b5021a8c8c96dfe2f29f037c025a8c7fa9ecc41a3ccda6f8f5f81835] <==
	I1001 18:42:59.833902       1 server_linux.go:53] "Using iptables proxy"
	I1001 18:42:59.965288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1001 18:43:00.065813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1001 18:43:00.065855       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1001 18:43:00.065932       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 18:43:00.468417       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 18:43:00.468668       1 server_linux.go:132] "Using iptables Proxier"
	I1001 18:43:00.494035       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 18:43:00.494394       1 server.go:527] "Version info" version="v1.34.1"
	I1001 18:43:00.494423       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:43:00.509810       1 config.go:200] "Starting service config controller"
	I1001 18:43:00.509944       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1001 18:43:00.518212       1 config.go:106] "Starting endpoint slice config controller"
	I1001 18:43:00.518379       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1001 18:43:00.518461       1 config.go:403] "Starting serviceCIDR config controller"
	I1001 18:43:00.518606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1001 18:43:00.523887       1 config.go:309] "Starting node config controller"
	I1001 18:43:00.523955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1001 18:43:00.523969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1001 18:43:00.610192       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1001 18:43:00.619024       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1001 18:43:00.619071       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [676b0aa650d37940b183e0db13563041aa747e8a9bfb8c5d3c1dca57e8c863a7] <==
	I1001 18:42:16.051022       1 serving.go:386] Generated self-signed cert in-memory
	W1001 18:42:18.976061       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 18:42:18.976096       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 18:42:18.976107       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 18:42:18.976118       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 18:42:19.150223       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1001 18:42:19.150325       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:42:19.154926       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:19.155386       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:19.155109       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 18:42:19.155133       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:42:19.256264       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:41.315315       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1001 18:42:41.315502       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1001 18:42:41.315413       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:41.315668       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1001 18:42:41.315388       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1001 18:42:41.328142       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [883ef5c3454d8c699c15aff45ad0550ba6de33fae06a69c93aaf07af2c714be6] <==
	I1001 18:42:56.921853       1 serving.go:386] Generated self-signed cert in-memory
	I1001 18:42:59.189349       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1001 18:42:59.189459       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 18:42:59.194606       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1001 18:42:59.194697       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1001 18:42:59.194724       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1001 18:42:59.194759       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 18:42:59.197094       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:59.197123       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 18:42:59.197143       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:42:59.197157       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:42:59.295710       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1001 18:42:59.298600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1001 18:42:59.298682       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.376575    4478 manager.go:1116] Failed to create existing container: /docker/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/crio-ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3: Error finding container ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3: Status 404 returned error can't find the container with id ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.376797    4478 manager.go:1116] Failed to create existing container: /crio-29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e: Error finding container 29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e: Status 404 returned error can't find the container with id 29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.376989    4478 manager.go:1116] Failed to create existing container: /crio-875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206: Error finding container 875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206: Status 404 returned error can't find the container with id 875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.377191    4478 manager.go:1116] Failed to create existing container: /crio-131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80: Error finding container 131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80: Status 404 returned error can't find the container with id 131e6172314cb00f65733cb64a7cb8a5d3b5fb150d3b46d0bfd4778b61879f80
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.377431    4478 manager.go:1116] Failed to create existing container: /docker/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/crio-29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e: Error finding container 29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e: Status 404 returned error can't find the container with id 29be358a2b036b7672f996bb7c0deed650cd1fa2931e0549ba3d8967cbc17a5e
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.377695    4478 manager.go:1116] Failed to create existing container: /docker/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/crio-875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206: Error finding container 875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206: Status 404 returned error can't find the container with id 875115c4733b7f61d4072506349676fccecf11f2916f3730aee70c3ac623f206
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.377918    4478 manager.go:1116] Failed to create existing container: /crio-ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3: Error finding container ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3: Status 404 returned error can't find the container with id ce2b2d58d112469ab6060a50d9730c5a87c9089e326d0a4725cb03d020114fc3
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.378187    4478 manager.go:1116] Failed to create existing container: /crio-597f903c9b9296d34676507e53cd95fc58b2543f0dab76d3e8b18fc612d5d68c: Error finding container 597f903c9b9296d34676507e53cd95fc58b2543f0dab76d3e8b18fc612d5d68c: Status 404 returned error can't find the container with id 597f903c9b9296d34676507e53cd95fc58b2543f0dab76d3e8b18fc612d5d68c
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.378378    4478 manager.go:1116] Failed to create existing container: /docker/e5772678338b65083ce0d90376854b881d1a0c4c4a4493ec0c64d956b3e82d42/crio-9b58fb5089f8ed5f2ffffd08b437aad86115b75d78ec2763324b5beea1f55b8d: Error finding container 9b58fb5089f8ed5f2ffffd08b437aad86115b75d78ec2763324b5beea1f55b8d: Status 404 returned error can't find the container with id 9b58fb5089f8ed5f2ffffd08b437aad86115b75d78ec2763324b5beea1f55b8d
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.485836    4478 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344774485527813 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:52:54 functional-246462 kubelet[4478]: E1001 18:52:54.485874    4478 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344774485527813 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:01 functional-246462 kubelet[4478]: E1001 18:53:01.283237    4478 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-r8rnr" podUID="69fc6257-3746-42b7-9e61-d0c33627ab4b"
	Oct 01 18:53:04 functional-246462 kubelet[4478]: E1001 18:53:04.487318    4478 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344784487019277 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:04 functional-246462 kubelet[4478]: E1001 18:53:04.487360    4478 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344784487019277 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:05 functional-246462 kubelet[4478]: E1001 18:53:05.283234    4478 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-9dbdh" podUID="6b6b185f-4bbc-48a2-938c-f55645ccd36b"
	Oct 01 18:53:14 functional-246462 kubelet[4478]: E1001 18:53:14.488768    4478 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344794488484671 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:14 functional-246462 kubelet[4478]: E1001 18:53:14.488803    4478 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344794488484671 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:15 functional-246462 kubelet[4478]: E1001 18:53:15.282819    4478 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-r8rnr" podUID="69fc6257-3746-42b7-9e61-d0c33627ab4b"
	Oct 01 18:53:20 functional-246462 kubelet[4478]: E1001 18:53:20.283248    4478 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-9dbdh" podUID="6b6b185f-4bbc-48a2-938c-f55645ccd36b"
	Oct 01 18:53:24 functional-246462 kubelet[4478]: E1001 18:53:24.490489    4478 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344804490243880 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:24 functional-246462 kubelet[4478]: E1001 18:53:24.490525    4478 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344804490243880 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:29 functional-246462 kubelet[4478]: E1001 18:53:29.283822    4478 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-r8rnr" podUID="69fc6257-3746-42b7-9e61-d0c33627ab4b"
	Oct 01 18:53:31 functional-246462 kubelet[4478]: E1001 18:53:31.283576    4478 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-9dbdh" podUID="6b6b185f-4bbc-48a2-938c-f55645ccd36b"
	Oct 01 18:53:34 functional-246462 kubelet[4478]: E1001 18:53:34.492679    4478 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759344814492399043 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	Oct 01 18:53:34 functional-246462 kubelet[4478]: E1001 18:53:34.492715    4478 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759344814492399043 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218751} inodes_used:{value:89}}"
	
	
	==> storage-provisioner [17eea92035c833be2a0975f643aede6059015a11154734d2b33517e0d5081e08] <==
	W1001 18:53:12.150472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:14.154419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:14.158743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:16.162011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:16.166833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:18.169406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:18.175978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:20.179371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:20.185079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:22.198178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:22.204681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:24.207841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:24.212270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:26.215757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:26.220153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:28.222987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:28.229764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:30.232478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:30.236892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:32.239557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:32.244189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:34.247330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:34.251860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:36.256638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:53:36.263984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e90141aa62b9bb79ff16ebca4945afba872156b02cc6c83e2c421d7a637dae3c] <==
	I1001 18:42:14.837563       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 18:42:19.216819       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 18:42:19.216928       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1001 18:42:19.227952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:22.683410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:26.943680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:30.541823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:33.595647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:36.617741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:36.627181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1001 18:42:36.627838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 18:42:36.631131       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-246462_e7a460bd-b7c6-4246-973f-04dc904b2dff!
	I1001 18:42:36.632523       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d46175c-e14f-4e2a-8a9f-3cab19ff7aab", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-246462_e7a460bd-b7c6-4246-973f-04dc904b2dff became leader
	W1001 18:42:36.637228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:36.648327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1001 18:42:36.731935       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-246462_e7a460bd-b7c6-4246-973f-04dc904b2dff!
	W1001 18:42:38.651654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:38.656316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:40.660790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1001 18:42:40.667385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-246462 -n functional-246462
helpers_test.go:269: (dbg) Run:  kubectl --context functional-246462 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-r8rnr hello-node-connect-7d85dfc575-9dbdh
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-246462 describe pod hello-node-75c85bcc94-r8rnr hello-node-connect-7d85dfc575-9dbdh
helpers_test.go:290: (dbg) kubectl --context functional-246462 describe pod hello-node-75c85bcc94-r8rnr hello-node-connect-7d85dfc575-9dbdh:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-r8rnr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-246462/192.168.49.2
	Start Time:       Wed, 01 Oct 2025 18:43:49 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfhq6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zfhq6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-r8rnr to functional-246462
	  Normal   Pulling    6m48s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m48s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m48s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m38s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m38s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-9dbdh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-246462/192.168.49.2
	Start Time:       Wed, 01 Oct 2025 18:43:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jth7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jth7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9dbdh to functional-246462
	  Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.80s)
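Root cause for this failure is visible in the kubelet log and pod events above: CRI-O rejects the short image name "kicbase/echo-server" because no unqualified-search registries are defined in /etc/containers/registries.conf inside the minikube node, so the echo-server pods never start, and every ServiceCmd check below fails in turn. A minimal remediation sketch, assuming the profile name from this run, that the kicbase node runs CRI-O under systemd, and that the file does not already set the key (commands are illustrative, not part of the harness):

	# Open a shell in the node for this profile
	minikube -p functional-246462 ssh
	# Inside the node: let short names resolve against Docker Hub, then restart CRI-O
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio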

TestFunctional/parallel/ServiceCmd/DeployApp (601.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-246462 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-246462 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-r8rnr" [69fc6257-3746-42b7-9e61-d0c33627ab4b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1001 18:45:29.267605  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:45:56.981908  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:50:29.267357  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-246462 -n functional-246462
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-01 18:53:50.709383556 +0000 UTC m=+1364.908912096
functional_test.go:1460: (dbg) Run:  kubectl --context functional-246462 describe po hello-node-75c85bcc94-r8rnr -n default
functional_test.go:1460: (dbg) kubectl --context functional-246462 describe po hello-node-75c85bcc94-r8rnr -n default:
Name:             hello-node-75c85bcc94-r8rnr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-246462/192.168.49.2
Start Time:       Wed, 01 Oct 2025 18:43:49 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfhq6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-zfhq6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-r8rnr to functional-246462
  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-246462 logs hello-node-75c85bcc94-r8rnr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-246462 logs hello-node-75c85bcc94-r8rnr -n default: exit status 1 (95.53098ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-r8rnr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-246462 logs hello-node-75c85bcc94-r8rnr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.10s)
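The same short-name failure sinks this test: the deployment created at functional_test.go:1451 pulls "kicbase/echo-server" with no registry or tag. A workaround sketch at the kubectl level, assuming the docker.io/kicbase/echo-server:1.0 reference that minikube's documentation commonly uses (a hypothetical substitute for the harness invocation, not the harness's actual code): fully qualifying the image bypasses short-name resolution entirely, so registries.conf never comes into play.

	kubectl --context functional-246462 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-246462 expose deployment hello-node --type=NodePort --port=8080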

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 service --namespace=default --https --url hello-node: exit status 115 (531.311056ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31428
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-246462 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 service hello-node --url --format={{.IP}}: exit status 115 (548.840699ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-246462 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 service hello-node --url: exit status 115 (542.400897ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31428
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-246462 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31428
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
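Note that HTTPS, Format, and URL each fail in about half a second with SVC_UNREACHABLE: minikube resolves the NodePort mapping fine (the stdout blocks above print https://192.168.49.2:31428 and http://192.168.49.2:31428) and exits 115 only because no running pod backs the hello-node service. These are cascades of the ImagePullBackOff, not URL-resolution bugs. A quick triage sketch (illustrative commands for local debugging, not from the harness) that distinguishes "no backing endpoints" from a genuine port-mapping problem:

	# Any endpoints behind the service at all?
	kubectl --context functional-246462 get endpointslices \
	  -l kubernetes.io/service-name=hello-node
	# Block until the pods are actually Ready; times out under ImagePullBackOff
	kubectl --context functional-246462 wait --for=condition=ready pod \
	  -l app=hello-node --timeout=60s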

Test pass (294/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 38.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 37.23
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 184.48
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.85
35 TestAddons/parallel/Registry 17.66
36 TestAddons/parallel/RegistryCreds 0.81
38 TestAddons/parallel/InspektorGadget 6.43
39 TestAddons/parallel/MetricsServer 6.31
41 TestAddons/parallel/CSI 45.49
42 TestAddons/parallel/Headlamp 12.25
43 TestAddons/parallel/CloudSpanner 6.56
44 TestAddons/parallel/LocalPath 51.31
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 12.26
48 TestAddons/StoppedEnableDisable 12.18
49 TestCertOptions 37.03
50 TestCertExpiration 259.61
52 TestForceSystemdFlag 38.26
53 TestForceSystemdEnv 46.62
59 TestErrorSpam/setup 29.92
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.04
62 TestErrorSpam/pause 1.65
63 TestErrorSpam/unpause 1.83
64 TestErrorSpam/stop 1.44
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.16
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.07
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
76 TestFunctional/serial/CacheCmd/cache/add_local 1.4
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 34.59
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.71
87 TestFunctional/serial/LogsFileCmd 1.8
88 TestFunctional/serial/InvalidService 4.81
90 TestFunctional/parallel/ConfigCmd 0.49
91 TestFunctional/parallel/DashboardCmd 10.52
92 TestFunctional/parallel/DryRun 0.57
93 TestFunctional/parallel/InternationalLanguage 0.29
94 TestFunctional/parallel/StatusCmd 1.26
99 TestFunctional/parallel/AddonsCmd 0.2
100 TestFunctional/parallel/PersistentVolumeClaim 24.99
102 TestFunctional/parallel/SSHCmd 0.69
103 TestFunctional/parallel/CpCmd 2.4
105 TestFunctional/parallel/FileSync 0.35
106 TestFunctional/parallel/CertSync 2.15
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
114 TestFunctional/parallel/License 0.3
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
128 TestFunctional/parallel/ProfileCmd/profile_list 0.4
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 8.78
131 TestFunctional/parallel/MountCmd/specific-port 1.82
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
133 TestFunctional/parallel/ServiceCmd/List 0.61
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 1.38
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
145 TestFunctional/parallel/ImageCommands/Setup 0.67
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.12
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.66
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.94
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.73
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 179.45
164 TestMultiControlPlane/serial/DeployApp 8.77
165 TestMultiControlPlane/serial/PingHostFromPods 1.61
166 TestMultiControlPlane/serial/AddWorkerNode 58.28
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
169 TestMultiControlPlane/serial/CopyFile 19.53
170 TestMultiControlPlane/serial/StopSecondaryNode 12.81
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 32.14
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.28
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.84
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.51
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
177 TestMultiControlPlane/serial/StopCluster 35.68
178 TestMultiControlPlane/serial/RestartCluster 67.2
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
180 TestMultiControlPlane/serial/AddSecondaryNode 89.07
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.04
185 TestJSONOutput/start/Command 78.95
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.72
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.87
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 67.09
211 TestKicCustomNetwork/use_default_bridge_network 36.18
212 TestKicExistingNetwork 37.31
213 TestKicCustomSubnet 32.24
214 TestKicStaticIP 36.07
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 73.88
219 TestMountStart/serial/StartWithMountFirst 7.21
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.77
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.21
226 TestMountStart/serial/RestartStopped 7.94
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 132.87
231 TestMultiNode/serial/DeployApp2Nodes 7.58
232 TestMultiNode/serial/PingHostFrom2Pods 0.98
233 TestMultiNode/serial/AddNode 55.88
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.28
237 TestMultiNode/serial/StopNode 2.31
238 TestMultiNode/serial/StartAfterStop 7.96
239 TestMultiNode/serial/RestartKeepsNodes 72.95
240 TestMultiNode/serial/DeleteNode 5.7
241 TestMultiNode/serial/StopMultiNode 23.86
242 TestMultiNode/serial/RestartMultiNode 53.72
243 TestMultiNode/serial/ValidateNameConflict 31.68
248 TestPreload 128.68
250 TestScheduledStopUnix 105.81
253 TestInsufficientStorage 10.5
254 TestRunningBinaryUpgrade 58.55
256 TestKubernetesUpgrade 355.32
257 TestMissingContainerUpgrade 136.76
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 41.17
261 TestNoKubernetes/serial/StartWithStopK8s 35.27
262 TestNoKubernetes/serial/Start 7.89
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 0.69
265 TestNoKubernetes/serial/Stop 1.2
266 TestNoKubernetes/serial/StartNoArgs 7.57
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
268 TestStoppedBinaryUpgrade/Setup 8.5
269 TestStoppedBinaryUpgrade/Upgrade 62.7
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
279 TestPause/serial/Start 82.52
280 TestPause/serial/SecondStartNoReconfiguration 27.33
281 TestPause/serial/Pause 0.86
282 TestPause/serial/VerifyStatus 0.4
283 TestPause/serial/Unpause 0.7
284 TestPause/serial/PauseAgain 0.85
285 TestPause/serial/DeletePaused 2.65
286 TestPause/serial/VerifyDeletedResources 0.38
294 TestNetworkPlugins/group/false 3.73
299 TestStartStop/group/old-k8s-version/serial/FirstStart 60.25
300 TestStartStop/group/old-k8s-version/serial/DeployApp 12.43
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
302 TestStartStop/group/old-k8s-version/serial/Stop 11.94
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/old-k8s-version/serial/SecondStart 55.98
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
308 TestStartStop/group/old-k8s-version/serial/Pause 3.4
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.17
312 TestStartStop/group/embed-certs/serial/FirstStart 80.67
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.16
318 TestStartStop/group/embed-certs/serial/DeployApp 11.44
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.43
320 TestStartStop/group/embed-certs/serial/Stop 12.62
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
322 TestStartStop/group/embed-certs/serial/SecondStart 55.82
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.15
328 TestStartStop/group/no-preload/serial/FirstStart 70.4
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
332 TestStartStop/group/embed-certs/serial/Pause 4.02
334 TestStartStop/group/newest-cni/serial/FirstStart 42.2
335 TestStartStop/group/no-preload/serial/DeployApp 10.37
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
338 TestStartStop/group/newest-cni/serial/Stop 1.39
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 15.17
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
342 TestStartStop/group/no-preload/serial/Stop 12.13
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
347 TestStartStop/group/newest-cni/serial/Pause 3.37
348 TestStartStop/group/no-preload/serial/SecondStart 55.01
349 TestNetworkPlugins/group/auto/Start 86.46
350 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
352 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/no-preload/serial/Pause 3.18
354 TestNetworkPlugins/group/kindnet/Start 83.16
355 TestNetworkPlugins/group/auto/KubeletFlags 0.38
356 TestNetworkPlugins/group/auto/NetCatPod 11.4
357 TestNetworkPlugins/group/auto/DNS 0.21
358 TestNetworkPlugins/group/auto/Localhost 0.19
359 TestNetworkPlugins/group/auto/HairPin 0.22
360 TestNetworkPlugins/group/calico/Start 60.7
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
363 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
364 TestNetworkPlugins/group/kindnet/DNS 0.31
365 TestNetworkPlugins/group/kindnet/Localhost 0.24
366 TestNetworkPlugins/group/kindnet/HairPin 0.33
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.43
369 TestNetworkPlugins/group/calico/NetCatPod 12.4
370 TestNetworkPlugins/group/custom-flannel/Start 61.86
371 TestNetworkPlugins/group/calico/DNS 0.4
372 TestNetworkPlugins/group/calico/Localhost 0.26
373 TestNetworkPlugins/group/calico/HairPin 0.24
374 TestNetworkPlugins/group/enable-default-cni/Start 80.09
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.44
377 TestNetworkPlugins/group/custom-flannel/DNS 0.2
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
380 TestNetworkPlugins/group/flannel/Start 63.1
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
386 TestNetworkPlugins/group/bridge/Start 70.41
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
389 TestNetworkPlugins/group/flannel/NetCatPod 12.3
390 TestNetworkPlugins/group/flannel/DNS 0.2
391 TestNetworkPlugins/group/flannel/Localhost 0.15
392 TestNetworkPlugins/group/flannel/HairPin 0.24
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
394 TestNetworkPlugins/group/bridge/NetCatPod 9.3
395 TestNetworkPlugins/group/bridge/DNS 0.25
396 TestNetworkPlugins/group/bridge/Localhost 0.17
397 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (38.63s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-512587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-512587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.633890151s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (38.63s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1001 18:31:44.474124  290016 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1001 18:31:44.474204  290016 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
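
The preload-exists check passes as soon as the cached tarball is present on disk; a hand-run equivalent (path copied verbatim from the log above):

	ls -lh /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4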

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-512587
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-512587: exit status 85 (88.585506ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-512587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-512587 │ jenkins │ v1.37.0 │ 01 Oct 25 18:31 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:31:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:31:05.882992  290021 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:31:05.883189  290021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:31:05.883220  290021 out.go:374] Setting ErrFile to fd 2...
	I1001 18:31:05.883244  290021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:31:05.883534  290021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	W1001 18:31:05.883701  290021 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21631-288146/.minikube/config/config.json: open /home/jenkins/minikube-integration/21631-288146/.minikube/config/config.json: no such file or directory
	I1001 18:31:05.884146  290021 out.go:368] Setting JSON to true
	I1001 18:31:05.884984  290021 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4418,"bootTime":1759339048,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 18:31:05.885083  290021 start.go:140] virtualization:  
	I1001 18:31:05.888976  290021 out.go:99] [download-only-512587] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1001 18:31:05.889185  290021 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 18:31:05.889298  290021 notify.go:220] Checking for updates...
	I1001 18:31:05.893245  290021 out.go:171] MINIKUBE_LOCATION=21631
	I1001 18:31:05.896210  290021 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:31:05.899109  290021 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:31:05.902000  290021 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 18:31:05.904976  290021 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 18:31:05.910673  290021 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 18:31:05.910996  290021 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:31:05.932049  290021 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 18:31:05.932169  290021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:31:05.993546  290021 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-01 18:31:05.983611202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:31:05.993664  290021 docker.go:318] overlay module found
	I1001 18:31:05.996607  290021 out.go:99] Using the docker driver based on user configuration
	I1001 18:31:05.996641  290021 start.go:304] selected driver: docker
	I1001 18:31:05.996648  290021 start.go:921] validating driver "docker" against <nil>
	I1001 18:31:05.996793  290021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:31:06.054958  290021 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-01 18:31:06.046050912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:31:06.055115  290021 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 18:31:06.055386  290021 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1001 18:31:06.055552  290021 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 18:31:06.058650  290021 out.go:171] Using Docker driver with root privileges
	I1001 18:31:06.061596  290021 cni.go:84] Creating CNI manager for ""
	I1001 18:31:06.061675  290021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:31:06.061690  290021 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 18:31:06.061779  290021 start.go:348] cluster config:
	{Name:download-only-512587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-512587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:31:06.064650  290021 out.go:99] Starting "download-only-512587" primary control-plane node in "download-only-512587" cluster
	I1001 18:31:06.064677  290021 cache.go:123] Beginning downloading kic base image for docker with crio
	I1001 18:31:06.067628  290021 out.go:99] Pulling base image v0.0.48 ...
	I1001 18:31:06.067659  290021 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1001 18:31:06.067790  290021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I1001 18:31:06.082945  290021 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I1001 18:31:06.083133  290021 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I1001 18:31:06.083246  290021 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I1001 18:31:06.127031  290021 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1001 18:31:06.127067  290021 cache.go:58] Caching tarball of preloaded images
	I1001 18:31:06.127224  290021 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1001 18:31:06.130645  290021 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1001 18:31:06.130689  290021 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1001 18:31:06.223489  290021 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1001 18:31:06.223654  290021 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1001 18:31:14.778765  290021 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-512587 host does not exist
	  To start a cluster, run: "minikube start -p download-only-512587"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
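
The Last Start log above fetches the preload with its MD5 pinned in the download URL (?checksum=md5:...), so the transfer is integrity-checked rather than trusted. A sketch of the same check done by hand, using only the URL and the checksum reported by the GCS API in the log:

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	md5sum preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4   # expect e092595ade89dbfc477bd4cd6b9c633b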

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-512587
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (37.23s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-472202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-472202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (37.228394592s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (37.23s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1001 18:32:22.140204  290016 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1001 18:32:22.140243  290016 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-472202
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-472202: exit status 85 (96.221864ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-512587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-512587 │ jenkins │ v1.37.0 │ 01 Oct 25 18:31 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Oct 25 18:31 UTC │ 01 Oct 25 18:31 UTC │
	│ delete  │ -p download-only-512587                                                                                                                                                   │ download-only-512587 │ jenkins │ v1.37.0 │ 01 Oct 25 18:31 UTC │ 01 Oct 25 18:31 UTC │
	│ start   │ -o=json --download-only -p download-only-472202 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-472202 │ jenkins │ v1.37.0 │ 01 Oct 25 18:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/01 18:31:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 18:31:44.956363  290219 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:31:44.956484  290219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:31:44.956494  290219 out.go:374] Setting ErrFile to fd 2...
	I1001 18:31:44.956500  290219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:31:44.956853  290219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 18:31:44.957350  290219 out.go:368] Setting JSON to true
	I1001 18:31:44.958191  290219 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4457,"bootTime":1759339048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 18:31:44.958278  290219 start.go:140] virtualization:  
	I1001 18:31:44.961602  290219 out.go:99] [download-only-472202] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1001 18:31:44.961890  290219 notify.go:220] Checking for updates...
	I1001 18:31:44.965538  290219 out.go:171] MINIKUBE_LOCATION=21631
	I1001 18:31:44.968608  290219 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:31:44.971527  290219 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:31:44.974398  290219 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 18:31:44.977355  290219 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 18:31:44.983125  290219 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 18:31:44.983401  290219 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:31:45.011093  290219 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 18:31:45.011232  290219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:31:45.115893  290219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-01 18:31:45.103239539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:31:45.116019  290219 docker.go:318] overlay module found
	I1001 18:31:45.119184  290219 out.go:99] Using the docker driver based on user configuration
	I1001 18:31:45.119254  290219 start.go:304] selected driver: docker
	I1001 18:31:45.119273  290219 start.go:921] validating driver "docker" against <nil>
	I1001 18:31:45.119413  290219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:31:45.199416  290219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-01 18:31:45.189084707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:31:45.199636  290219 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1001 18:31:45.200028  290219 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1001 18:31:45.200273  290219 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 18:31:45.203426  290219 out.go:171] Using Docker driver with root privileges
	I1001 18:31:45.206741  290219 cni.go:84] Creating CNI manager for ""
	I1001 18:31:45.206848  290219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1001 18:31:45.206861  290219 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 18:31:45.206961  290219 start.go:348] cluster config:
	{Name:download-only-472202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-472202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:31:45.210067  290219 out.go:99] Starting "download-only-472202" primary control-plane node in "download-only-472202" cluster
	I1001 18:31:45.210133  290219 cache.go:123] Beginning downloading kic base image for docker with crio
	I1001 18:31:45.226544  290219 out.go:99] Pulling base image v0.0.48 ...
	I1001 18:31:45.226584  290219 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:31:45.226808  290219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I1001 18:31:45.245538  290219 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I1001 18:31:45.245849  290219 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I1001 18:31:45.245882  290219 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I1001 18:31:45.245889  290219 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I1001 18:31:45.245900  290219 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I1001 18:31:45.289362  290219 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1001 18:31:45.289393  290219 cache.go:58] Caching tarball of preloaded images
	I1001 18:31:45.289617  290219 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1001 18:31:45.293118  290219 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1001 18:31:45.293185  290219 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1001 18:31:45.386743  290219 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1001 18:31:45.386840  290219 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21631-288146/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-472202 host does not exist
	  To start a cluster, run: "minikube start -p download-only-472202"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-472202
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1001 18:32:23.279385  290016 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-755747 --alsologtostderr --binary-mirror http://127.0.0.1:44469 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-755747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-755747
--- PASS: TestBinaryMirror (0.62s)
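
The mirror run skips caching kubectl and instead resolves it against dl.k8s.io with a checksum=file: reference to the published digest. A hand-verification sketch built from the two URLs in the log (standard sha256sum two-space check format):

	curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum --check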

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-157757
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-157757: exit status 85 (83.623233ms)

-- stdout --
	* Profile "addons-157757" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-157757"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-157757
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-157757: exit status 85 (81.65245ms)

-- stdout --
	* Profile "addons-157757" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-157757"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
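
Both pre-setup checks exercise the same exit 85 path: addon commands refuse to run against a profile that does not yet exist. Confirming the empty profile set follows the hint printed by the tool itself:

	out/minikube-linux-arm64 profile list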

TestAddons/Setup (184.48s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-157757 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-157757 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m4.48161946s)
--- PASS: TestAddons/Setup (184.48s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-157757 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-157757 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-157757 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-157757 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [799f7d22-ea81-40de-aec9-7241432a6300] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [799f7d22-ea81-40de-aec9-7241432a6300] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003463494s
addons_test.go:694: (dbg) Run:  kubectl --context addons-157757 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-157757 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-157757 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-157757 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

TestAddons/parallel/Registry (17.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.370136ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-hwz6l" [e083da18-8bb1-4b19-a43f-e6ec60f32ec0] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002840452s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9fbtr" [fffe62e8-26fb-42b2-bf14-e16fe97877ea] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004967829s
addons_test.go:392: (dbg) Run:  kubectl --context addons-157757 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-157757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-157757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.081022224s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 ip
2025/10/01 18:36:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable registry --alsologtostderr -v=1: (1.254907374s)
--- PASS: TestAddons/parallel/Registry (17.66s)
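Note: the in-cluster reachability check reduces to a single throwaway pod probing the registry's service DNS name; a minimal sketch using the same image and URL as the run above:

	kubectl --context addons-157757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"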

TestAddons/parallel/RegistryCreds (0.81s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.269439ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-157757
addons_test.go:332: (dbg) Run:  kubectl --context addons-157757 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

TestAddons/parallel/InspektorGadget (6.43s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-g9vlb" [aed54a92-8727-4ba9-acf3-416ababe3662] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003841389s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.43s)

TestAddons/parallel/MetricsServer (6.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.708623ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-tj2w5" [d2bbd030-0a2c-40dd-8cf1-66db6f6d2ca4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003620934s
addons_test.go:463: (dbg) Run:  kubectl --context addons-157757 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable metrics-server --alsologtostderr -v=1: (1.179669504s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)

TestAddons/parallel/CSI (45.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1001 18:36:00.667621  290016 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 18:36:00.672282  290016 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 18:36:00.672315  290016 kapi.go:107] duration metric: took 4.706778ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.718971ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7558b5c9-070d-4293-aefe-eacf92689046] Pending
helpers_test.go:352: "task-pv-pod" [7558b5c9-070d-4293-aefe-eacf92689046] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7558b5c9-070d-4293-aefe-eacf92689046] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005573603s
addons_test.go:572: (dbg) Run:  kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-157757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-157757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-157757 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-157757 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [91d34e32-76b3-4324-9c2a-45aa300b836a] Pending
helpers_test.go:352: "task-pv-pod-restore" [91d34e32-76b3-4324-9c2a-45aa300b836a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [91d34e32-76b3-4324-9c2a-45aa300b836a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004243718s
addons_test.go:614: (dbg) Run:  kubectl --context addons-157757 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-157757 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-157757 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.91892188s)
--- PASS: TestAddons/parallel/CSI (45.49s)
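Note: the flow above is the standard CSI provision/attach/snapshot/restore sequence; a minimal sketch using the manifests from the minikube repo's testdata (any equivalent PVC, pod, and VolumeSnapshot manifests would do):

	kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-157757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-157757 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml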

TestAddons/parallel/Headlamp (12.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-157757 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-p48kw" [354a5f59-f566-44bc-8cff-2af60161efa4] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-p48kw" [354a5f59-f566-44bc-8cff-2af60161efa4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-p48kw" [354a5f59-f566-44bc-8cff-2af60161efa4] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005770219s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.25s)

TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-9c22c" [c03126ae-f45e-4b0a-9166-9fd1d197b617] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003233999s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

TestAddons/parallel/LocalPath (51.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-157757 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-157757 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-157757 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4f23a393-a956-4e60-bbe4-00cf5ec2b338] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4f23a393-a956-4e60-bbe4-00cf5ec2b338] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4f23a393-a956-4e60-bbe4-00cf5ec2b338] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003899605s
addons_test.go:967: (dbg) Run:  kubectl --context addons-157757 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 ssh "cat /opt/local-path-provisioner/pvc-e668be2f-67e8-4fa9-99d1-db2c2da3b417_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-157757 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-157757 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.991842902s)
--- PASS: TestAddons/parallel/LocalPath (51.31s)
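Note: the key assertion is that the provisioned file is readable on the node's filesystem; a minimal sketch (the pvc-..._default_test-pvc directory name is generated per run, so <pvc-dir> below is a placeholder for the volume name printed by `kubectl get pvc test-pvc -o=json`):

	kubectl --context addons-157757 get pvc test-pvc -o jsonpath={.status.phase}
	out/minikube-linux-arm64 -p addons-157757 ssh "cat /opt/local-path-provisioner/<pvc-dir>_default_test-pvc/file1"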

TestAddons/parallel/NvidiaDevicePlugin (6.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ltnvc" [70d51da8-b222-413a-b7e3-18518bfefd2a] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003973713s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

TestAddons/parallel/Yakd (12.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-7lxnz" [aa0db454-eaf4-4061-a69c-bc90079762f2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003884097s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-157757 addons disable yakd --alsologtostderr -v=1: (6.256693035s)
--- PASS: TestAddons/parallel/Yakd (12.26s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-157757
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-157757: (11.898604391s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-157757
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-157757
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-157757
--- PASS: TestAddons/StoppedEnableDisable (12.18s)
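Note: the point of this test is that addon state can be toggled while the cluster is stopped; the same sequence by hand:

	out/minikube-linux-arm64 stop -p addons-157757
	out/minikube-linux-arm64 addons enable dashboard -p addons-157757
	out/minikube-linux-arm64 addons disable dashboard -p addons-157757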

TestCertOptions (37.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-914729 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-914729 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.296974273s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-914729 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-914729 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-914729 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-914729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-914729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-914729: (2.040726194s)
--- PASS: TestCertOptions (37.03s)
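Note: the test verifies that the extra SANs and the custom port make it into the generated apiserver certificate; the start flags plus the openssl check, by hand:

	out/minikube-linux-arm64 start -p cert-options-914729 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p cert-options-914729 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"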

TestCertExpiration (259.61s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-877854 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1001 19:30:29.267836  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-877854 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (45.737865601s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-877854 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-877854 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.051871194s)
helpers_test.go:175: Cleaning up "cert-expiration-877854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-877854
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-877854: (2.823268133s)
--- PASS: TestCertExpiration (259.61s)
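Note: the test starts a cluster whose certificates expire in 3 minutes, waits out the expiry, then restarts with a long expiry to force regeneration; the two starts by hand:

	out/minikube-linux-arm64 start -p cert-expiration-877854 --cert-expiration=3m --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p cert-expiration-877854 --cert-expiration=8760h --driver=docker --container-runtime=crio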

TestForceSystemdFlag (38.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-247291 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-247291 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.967892265s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-247291 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-247291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-247291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-247291: (2.833062625s)
--- PASS: TestForceSystemdFlag (38.26s)

TestForceSystemdEnv (46.62s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-871484 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1001 19:30:12.347214  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-871484 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.904305021s)
helpers_test.go:175: Cleaning up "force-systemd-env-871484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-871484
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-871484: (2.713486725s)
--- PASS: TestForceSystemdEnv (46.62s)

TestErrorSpam/setup (29.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-885406 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-885406 --driver=docker  --container-runtime=crio
E1001 18:40:29.277245  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.283632  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.294930  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.316262  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.357708  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.439205  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.600671  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:29.922004  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:30.564119  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:40:31.846107  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-885406 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-885406 --driver=docker  --container-runtime=crio: (29.920411002s)
--- PASS: TestErrorSpam/setup (29.92s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 pause
E1001 18:40:34.407768  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 stop: (1.242407616s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-885406 --log_dir /tmp/nospam-885406 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21631-288146/.minikube/files/etc/test/nested/copy/290016/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-246462 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1001 18:40:49.771802  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:41:10.253098  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:41:51.215520  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-246462 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.157471105s)
--- PASS: TestFunctional/serial/StartWithProxy (80.16s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.07s)

=== RUN   TestFunctional/serial/SoftStart
I1001 18:42:03.858938  290016 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-246462 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-246462 --alsologtostderr -v=8: (27.066521902s)
functional_test.go:678: soft start took 27.074578403s for "functional-246462" cluster.
I1001 18:42:30.925792  290016 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.07s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-246462 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 cache add registry.k8s.io/pause:3.1: (1.291497818s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 cache add registry.k8s.io/pause:3.3: (1.370671071s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 cache add registry.k8s.io/pause:latest: (1.287815995s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-246462 /tmp/TestFunctionalserialCacheCmdcacheadd_local3820329379/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cache add minikube-local-cache-test:functional-246462
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cache delete minikube-local-cache-test:functional-246462
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-246462
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.879512ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 cache reload: (1.184645555s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
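Note: the reload check is delete-verify-repopulate: remove the image from the node's runtime, confirm `crictl inspecti` fails, then restore it from minikube's on-disk cache:

	out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-arm64 -p functional-246462 cache reload
	out/minikube-linux-arm64 -p functional-246462 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored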

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 kubectl -- --context functional-246462 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-246462 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (34.59s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-246462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1001 18:43:13.139604  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-246462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.589017013s)
functional_test.go:776: restart took 34.58912478s for "functional-246462" cluster.
I1001 18:43:14.022159  290016 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (34.59s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-246462 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 logs: (1.71173099s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.8s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 logs --file /tmp/TestFunctionalserialLogsFileCmd1226426678/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 logs --file /tmp/TestFunctionalserialLogsFileCmd1226426678/001/logs.txt: (1.800926945s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

TestFunctional/serial/InvalidService (4.81s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-246462 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-246462
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-246462: exit status 115 (411.425874ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30690 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-246462 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-246462 delete -f testdata/invalidsvc.yaml: (1.1449052s)
--- PASS: TestFunctional/serial/InvalidService (4.81s)
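Note: `minikube service` is expected to fail fast with exit status 115 (SVC_UNREACHABLE) when a service has no running backing pod; reproducing with the same manifest:

	kubectl --context functional-246462 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-246462   # exit 115
	kubectl --context functional-246462 delete -f testdata/invalidsvc.yaml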

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 config get cpus: exit status 14 (83.734123ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 config get cpus: exit status 14 (75.17592ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
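Note: `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the test asserts before and after the set/unset round trip:

	out/minikube-linux-arm64 -p functional-246462 config get cpus     # exit 14 while unset
	out/minikube-linux-arm64 -p functional-246462 config set cpus 2
	out/minikube-linux-arm64 -p functional-246462 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-246462 config unset cpus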

TestFunctional/parallel/DashboardCmd (10.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-246462 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-246462 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 319747: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.52s)

TestFunctional/parallel/DryRun (0.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-246462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-246462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (243.679304ms)

-- stdout --
	* [functional-246462] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1001 18:53:52.979744  319196 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:53:52.979880  319196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:53:52.979892  319196 out.go:374] Setting ErrFile to fd 2...
	I1001 18:53:52.979897  319196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:53:52.980165  319196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 18:53:52.980569  319196 out.go:368] Setting JSON to false
	I1001 18:53:52.981474  319196 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5785,"bootTime":1759339048,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 18:53:52.981543  319196 start.go:140] virtualization:  
	I1001 18:53:52.984676  319196 out.go:179] * [functional-246462] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1001 18:53:52.987525  319196 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:53:52.987584  319196 notify.go:220] Checking for updates...
	I1001 18:53:52.990571  319196 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:53:52.993873  319196 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:53:52.996771  319196 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 18:53:52.999568  319196 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 18:53:53.002413  319196 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:53:53.005851  319196 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:53:53.006597  319196 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:53:53.051381  319196 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 18:53:53.051529  319196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:53:53.146377  319196 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-01 18:53:53.130387434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:53:53.146520  319196 docker.go:318] overlay module found
	I1001 18:53:53.149729  319196 out.go:179] * Using the docker driver based on existing profile
	I1001 18:53:53.152826  319196 start.go:304] selected driver: docker
	I1001 18:53:53.152847  319196 start.go:921] validating driver "docker" against &{Name:functional-246462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:53:53.152950  319196 start.go:932] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:53:53.156630  319196 out.go:203] 
	W1001 18:53:53.159425  319196 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 18:53:53.162405  319196 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-246462 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.57s)
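
Editor's note: the guardrail exercised above can be reproduced outside the harness. A minimal sketch, reusing the profile and flags from this run (not part of the test suite):

    # Any --memory below the 1800MB usable minimum should fail fast with
    # RSRC_INSUFFICIENT_REQ_MEMORY and leave the existing profile untouched.
    out/minikube-linux-arm64 start -p functional-246462 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    echo $?   # 23, matching the exit status logged above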

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-246462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-246462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (290.186038ms)

-- stdout --
	* [functional-246462] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1001 18:53:52.710419  319108 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:53:52.710730  319108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:53:52.710740  319108 out.go:374] Setting ErrFile to fd 2...
	I1001 18:53:52.710745  319108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:53:52.711915  319108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 18:53:52.712323  319108 out.go:368] Setting JSON to false
	I1001 18:53:52.713220  319108 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5785,"bootTime":1759339048,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 18:53:52.713294  319108 start.go:140] virtualization:  
	I1001 18:53:52.716643  319108 out.go:179] * [functional-246462] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1001 18:53:52.720801  319108 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 18:53:52.720899  319108 notify.go:220] Checking for updates...
	I1001 18:53:52.734777  319108 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 18:53:52.737690  319108 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 18:53:52.740628  319108 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 18:53:52.745152  319108 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 18:53:52.748046  319108 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 18:53:52.751433  319108 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:53:52.751985  319108 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 18:53:52.795527  319108 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 18:53:52.795649  319108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:53:52.901519  319108 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-01 18:53:52.888761918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:53:52.901620  319108 docker.go:318] overlay module found
	I1001 18:53:52.904708  319108 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1001 18:53:52.907675  319108 start.go:304] selected driver: docker
	I1001 18:53:52.907697  319108 start.go:921] validating driver "docker" against &{Name:functional-246462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-246462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 18:53:52.907827  319108 start.go:932] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 18:53:52.911424  319108 out.go:203] 
	W1001 18:53:52.915225  319108 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 18:53:52.918183  319108 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)
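
Editor's note: the French output above is locale-driven; the command line shown is identical to the DryRun one, so the harness selects the language through the environment. A sketch, assuming minikube honors the standard locale variables:

    # Expect the localized variant of the same dry-run failure, still a non-zero exit
    LC_ALL=fr out/minikube-linux-arm64 start -p functional-246462 --dry-run --memory 250MB --driver=docker --container-runtime=crio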

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
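
Editor's note: the -f flag above takes an arbitrary Go template over minikube's status fields, so the labels are free-form (the "kublet" spelling in the test's format string is just a label, not a field name). A sketch of the three output modes exercised:

    out/minikube-linux-arm64 -p functional-246462 status
    out/minikube-linux-arm64 -p functional-246462 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-246462 status -o json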

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (24.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [86765e31-1cfa-45da-88b5-221ee9f60924] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003903531s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-246462 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-246462 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-246462 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-246462 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5c92856d-3a0d-4f0a-ac56-343891fb9309] Pending
helpers_test.go:352: "sp-pod" [5c92856d-3a0d-4f0a-ac56-343891fb9309] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5c92856d-3a0d-4f0a-ac56-343891fb9309] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004459955s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-246462 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-246462 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-246462 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [03607c44-731b-46e2-8292-a70e18eb5156] Pending
helpers_test.go:352: "sp-pod" [03607c44-731b-46e2-8292-a70e18eb5156] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003592156s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-246462 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.99s)
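
Editor's note: the interesting property checked above is that data written through the claim outlives the pod. Condensed from the commands in this transcript (the manifest contents live in the repo's testdata and are not reproduced here):

    kubectl --context functional-246462 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-246462 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-246462 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-246462 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-246462 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-246462 exec sp-pod -- ls /tmp/mount   # foo persists across the pod replacement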

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh -n functional-246462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cp functional-246462:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2763953101/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh -n functional-246462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh -n functional-246462 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)
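
Editor's note: a sketch of the three copy directions checked above, with the node-to-host target path simplified to a hypothetical /tmp location:

    # host -> node, node -> host, and host -> a node path that must be created on the fly
    out/minikube-linux-arm64 -p functional-246462 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-246462 cp functional-246462:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-arm64 -p functional-246462 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt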

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/290016/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /etc/test/nested/copy/290016/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/290016.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /etc/ssl/certs/290016.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/290016.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /usr/share/ca-certificates/290016.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2900162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /etc/ssl/certs/2900162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2900162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /usr/share/ca-certificates/2900162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)
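
Editor's note: what this exercises is that certificates placed under $MINIKUBE_HOME/certs on the host are installed into the node, both under their own names and under OpenSSL hash names (the 51391683.0-style entries above). A sketch with a hypothetical my-ca.pem, assuming the sync runs during the next minikube start:

    cp my-ca.pem "$MINIKUBE_HOME/certs/"
    out/minikube-linux-arm64 start -p functional-246462
    out/minikube-linux-arm64 -p functional-246462 ssh "sudo cat /etc/ssl/certs/my-ca.pem"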

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-246462 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active docker": exit status 1 (374.743481ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active containerd": exit status 1 (467.608257ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)
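
Editor's note: the non-zero exits above are the expected outcome; systemctl is-active returns non-zero for inactive units, which surfaces through ssh as status 3. A sketch of the full check, including the active runtime (the crio probe is an assumption here, not part of this test):

    out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active crio"        # expected: active, exit 0
    out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active docker"      # expected: inactive, non-zero exit
    out/minikube-linux-arm64 -p functional-246462 ssh "sudo systemctl is-active containerd"  # expected: inactive, non-zero exit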

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-246462 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-246462 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-246462 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-246462 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 315671: os: process already finished
helpers_test.go:519: unable to terminate pid 315488: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-246462 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-246462 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [26daac0a-577d-4a4c-b1d5-2911a9290f02] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [26daac0a-577d-4a4c-b1d5-2911a9290f02] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00360084s
I1001 18:43:33.728081  290016 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-246462 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.154.55 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
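
Editor's note: pulling the serial tunnel steps together: a LoadBalancer service only receives an ingress IP while a tunnel process is running. A sketch of manual use (the curl mirrors the 10.105.154.55 check above; tunnel may need elevated privileges to program routes, so run it in its own shell):

    # shell 1: keep the tunnel alive
    out/minikube-linux-arm64 -p functional-246462 tunnel
    # shell 2: read the ingress IP and hit the service directly
    IP=$(kubectl --context functional-246462 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP"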

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-246462 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "338.646325ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "63.577913ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "363.973251ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.069298ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdany-port977892725/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759344819207773912" to /tmp/TestFunctionalparallelMountCmdany-port977892725/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759344819207773912" to /tmp/TestFunctionalparallelMountCmdany-port977892725/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759344819207773912" to /tmp/TestFunctionalparallelMountCmdany-port977892725/001/test-1759344819207773912
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.171621ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 18:53:39.542977  290016 retry.go:31] will retry after 459.225567ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 18:53 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 18:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 18:53 test-1759344819207773912
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh cat /mount-9p/test-1759344819207773912
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-246462 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e839ab84-1056-4cea-a28b-34d7fde753d2] Pending
helpers_test.go:352: "busybox-mount" [e839ab84-1056-4cea-a28b-34d7fde753d2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e839ab84-1056-4cea-a28b-34d7fde753d2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e839ab84-1056-4cea-a28b-34d7fde753d2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003460553s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-246462 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdany-port977892725/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.78s)
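
Editor's note: the flow above, condensed: minikube mount serves a host directory into the node over 9p, and findmnt only succeeds once the mount has settled, which is why the log shows a single retry. A sketch with a hypothetical host directory:

    out/minikube-linux-arm64 mount -p functional-246462 /tmp/shared:/mount-9p &
    out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-246462 ssh -- ls -la /mount-9p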

TestFunctional/parallel/MountCmd/specific-port (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdspecific-port2946318425/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.680486ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 18:53:48.335847  290016 retry.go:31] will retry after 457.68918ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdspecific-port2946318425/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh "sudo umount -f /mount-9p": exit status 1 (260.28132ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-246462 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdspecific-port2946318425/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1857305426/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1857305426/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1857305426/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-246462 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1857305426/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1857305426/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-246462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1857305426/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 service list -o json
functional_test.go:1504: Took "602.362852ms" to run "out/minikube-linux-arm64 -p functional-246462 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)
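
Editor's note: both list forms above read the same service data; -o json is the machine-readable variant. Sketch:

    out/minikube-linux-arm64 -p functional-246462 service list
    out/minikube-linux-arm64 -p functional-246462 service list -o json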

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.38s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 version -o=json --components: (1.378127675s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)
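
Editor's note: the two version modes checked above; --components queries the node for per-component versions, which is why it takes over a second here while --short is near-instant. Sketch:

    out/minikube-linux-arm64 -p functional-246462 version --short
    out/minikube-linux-arm64 -p functional-246462 version -o=json --components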

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-246462 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-246462
localhost/kicbase/echo-server:functional-246462
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-246462 image ls --format short --alsologtostderr:
I1001 18:54:07.904369  321560 out.go:360] Setting OutFile to fd 1 ...
I1001 18:54:07.904570  321560 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:07.904594  321560 out.go:374] Setting ErrFile to fd 2...
I1001 18:54:07.904614  321560 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:07.905005  321560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
I1001 18:54:07.905983  321560 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:07.906161  321560 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:07.906995  321560 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
I1001 18:54:07.929347  321560 ssh_runner.go:195] Run: systemctl --version
I1001 18:54:07.929406  321560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
I1001 18:54:07.951558  321560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
I1001 18:54:08.058224  321560 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
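
Editor's note: the three list formats in this group are views over the same image inventory; the stderr above shows the underlying "sudo crictl images --output json" call on the node. Sketch:

    out/minikube-linux-arm64 -p functional-246462 image ls --format short
    out/minikube-linux-arm64 -p functional-246462 image ls --format table
    out/minikube-linux-arm64 -p functional-246462 image ls --format json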

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-246462 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-246462  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 0777d15d89ece │ 202MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/minikube-local-cache-test     │ functional-246462  │ a7fa2c6d05822 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-246462 image ls --format table --alsologtostderr:
I1001 18:54:08.620554  321748 out.go:360] Setting OutFile to fd 1 ...
I1001 18:54:08.620786  321748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.620816  321748 out.go:374] Setting ErrFile to fd 2...
I1001 18:54:08.620835  321748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.621139  321748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
I1001 18:54:08.621908  321748 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.622074  321748 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.622583  321748 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
I1001 18:54:08.645105  321748 ssh_runner.go:195] Run: systemctl --version
I1001 18:54:08.645158  321748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
I1001 18:54:08.672657  321748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
I1001 18:54:08.775384  321748 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-246462 image ls --format json --alsologtostderr:
[{"id":"0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b","repoDigests":["docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc","docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e4
38cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:4
9260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-246462"],"size":"4788229"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03
994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","
docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5
d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a7fa2c6d058226071c610fe609ae8c2841531fe810d6480f4f2130ff50b2bebb","repoDigests":["localhost/minikube-local-cache-test@sha256:079aa9c92d4cc73c587fcb248738f3ef83317ef14d2de1d74d254c2c38aa07d3"],"repoTags":["localhost/minikube-local-cache-test:functional-246462"],"size":"3330"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a
0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-246462 image ls --format json --alsologtostderr:
I1001 18:54:08.352774  321669 out.go:360] Setting OutFile to fd 1 ...
I1001 18:54:08.352893  321669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.352903  321669 out.go:374] Setting ErrFile to fd 2...
I1001 18:54:08.352909  321669 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.353168  321669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
I1001 18:54:08.354841  321669 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.355068  321669 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.355746  321669 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
I1001 18:54:08.378490  321669 ssh_runner.go:195] Run: systemctl --version
I1001 18:54:08.378544  321669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
I1001 18:54:08.396041  321669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
I1001 18:54:08.495831  321669 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
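The same records come back as a YAML list here, so the JSON sketch above carries over almost unchanged; a hedged variant using gopkg.in/yaml.v3 (the decoder choice is an assumption — any YAML library works):

	package main

	import (
		"fmt"
		"os/exec"

		"gopkg.in/yaml.v3"
	)

	type listedImage struct {
		ID          string   `yaml:"id"`
		RepoDigests []string `yaml:"repoDigests"`
		RepoTags    []string `yaml:"repoTags"`
		Size        string   `yaml:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-246462",
			"image", "ls", "--format", "yaml").Output()
		if err != nil {
			panic(err)
		}
		var images []listedImage
		if err := yaml.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		fmt.Printf("%d images listed\n", len(images))
	}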

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-246462 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b
repoDigests:
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
- docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-246462
size: "4788229"
- id: a7fa2c6d058226071c610fe609ae8c2841531fe810d6480f4f2130ff50b2bebb
repoDigests:
- localhost/minikube-local-cache-test@sha256:079aa9c92d4cc73c587fcb248738f3ef83317ef14d2de1d74d254c2c38aa07d3
repoTags:
- localhost/minikube-local-cache-test:functional-246462
size: "3330"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-246462 image ls --format yaml --alsologtostderr:
I1001 18:54:08.025013  321593 out.go:360] Setting OutFile to fd 1 ...
I1001 18:54:08.025271  321593 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.025301  321593 out.go:374] Setting ErrFile to fd 2...
I1001 18:54:08.025320  321593 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.025608  321593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
I1001 18:54:08.026311  321593 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.026491  321593 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.027070  321593 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
I1001 18:54:08.045242  321593 ssh_runner.go:195] Run: systemctl --version
I1001 18:54:08.045293  321593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
I1001 18:54:08.071963  321593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
I1001 18:54:08.171512  321593 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
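The three STEP lines in the stdout below imply a build context of roughly this shape — reconstructed from the log, so the real testdata/build directory may differ in detail:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

As the Stderr trace shows, minikube does not build on the host: it tars the context into /tmp/build.*.tar, copies the tar into the node under /var/lib/minikube/build, unpacks it there, and runs "sudo podman build" inside the node, since the cluster runtime is crio.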

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-246462 ssh pgrep buildkitd: exit status 1 (306.433698ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image build -t localhost/my-image:functional-246462 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 image build -t localhost/my-image:functional-246462 testdata/build --alsologtostderr: (3.289223957s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-246462 image build -t localhost/my-image:functional-246462 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 946f199715f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-246462
--> 45b49e35e60
Successfully tagged localhost/my-image:functional-246462
45b49e35e6095a9b8c8da0c0d6e4eb79ba73606f69eba1baa6772cd0a8bb6816
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-246462 image build -t localhost/my-image:functional-246462 testdata/build --alsologtostderr:
I1001 18:54:08.480324  321716 out.go:360] Setting OutFile to fd 1 ...
I1001 18:54:08.481181  321716 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.481222  321716 out.go:374] Setting ErrFile to fd 2...
I1001 18:54:08.481246  321716 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1001 18:54:08.481543  321716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
I1001 18:54:08.482249  321716 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.483191  321716 config.go:182] Loaded profile config "functional-246462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1001 18:54:08.483768  321716 cli_runner.go:164] Run: docker container inspect functional-246462 --format={{.State.Status}}
I1001 18:54:08.504920  321716 ssh_runner.go:195] Run: systemctl --version
I1001 18:54:08.504986  321716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-246462
I1001 18:54:08.527294  321716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/functional-246462/id_rsa Username:docker}
I1001 18:54:08.635299  321716 build_images.go:161] Building image from path: /tmp/build.239692062.tar
I1001 18:54:08.635377  321716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 18:54:08.646379  321716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.239692062.tar
I1001 18:54:08.650232  321716 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.239692062.tar: stat -c "%s %y" /var/lib/minikube/build/build.239692062.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.239692062.tar': No such file or directory
I1001 18:54:08.650263  321716 ssh_runner.go:362] scp /tmp/build.239692062.tar --> /var/lib/minikube/build/build.239692062.tar (3072 bytes)
I1001 18:54:08.680928  321716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.239692062
I1001 18:54:08.691966  321716 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.239692062 -xf /var/lib/minikube/build/build.239692062.tar
I1001 18:54:08.704842  321716 crio.go:315] Building image: /var/lib/minikube/build/build.239692062
I1001 18:54:08.704925  321716 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-246462 /var/lib/minikube/build/build.239692062 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1001 18:54:11.685469  321716 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-246462 /var/lib/minikube/build/build.239692062 --cgroup-manager=cgroupfs: (2.980511235s)
I1001 18:54:11.685535  321716 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.239692062
I1001 18:54:11.694982  321716 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.239692062.tar
I1001 18:54:11.703736  321716 build_images.go:217] Built localhost/my-image:functional-246462 from /tmp/build.239692062.tar
I1001 18:54:11.703769  321716 build_images.go:133] succeeded building to: functional-246462
I1001 18:54:11.703775  321716 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-246462
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)
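This test and the two daemon variants after it follow the same load-then-verify pattern: push a tag from the host docker daemon into the node's container storage, then confirm it with "image ls". A sketch of that pattern with os/exec — the binary path, profile, and tag are taken from the log:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		minikube := "out/minikube-linux-arm64"
		profile := "functional-246462"
		tag := "kicbase/echo-server:" + profile

		// Push the image from the host docker daemon into the node.
		if err := exec.Command(minikube, "-p", profile,
			"image", "load", "--daemon", tag).Run(); err != nil {
			panic(err)
		}

		// Verify it now appears in the node's image list.
		out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Contains(out, []byte("echo-server")) {
			panic("image not visible after load")
		}
		fmt.Println("loaded:", tag)
	}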

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image load --daemon kicbase/echo-server:functional-246462 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-246462 image load --daemon kicbase/echo-server:functional-246462 --alsologtostderr: (1.813562166s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image load --daemon kicbase/echo-server:functional-246462 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-246462
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image load --daemon kicbase/echo-server:functional-246462 --alsologtostderr
2025/10/01 18:54:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
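update-context rewrites the server endpoint that the kubeconfig records for the profile's cluster. To see what it operates on, a hedged sketch using client-go's clientcmd loader (the kubeconfig path is the usual default, an assumption here):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			panic(err)
		}
		cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		// Print each cluster's API server endpoint, the field update-context fixes up.
		for name, cluster := range cfg.Clusters {
			fmt.Printf("%s -> %s\n", name, cluster.Server)
		}
	}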

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)
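This test and the two after it (ImageRemove, ImageLoadFromFile) exercise a save, remove, and load-from-file round trip. The sequence, sketched with os/exec; the tar path here is illustrative, while the log uses the Jenkins workspace directory:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}

	func main() {
		profile := "functional-246462"
		tag := "kicbase/echo-server:" + profile
		tarball := "/tmp/echo-server-save.tar" // illustrative path

		run("-p", profile, "image", "save", tag, tarball) // node storage -> tar
		run("-p", profile, "image", "rm", tag)            // drop it from the node
		run("-p", profile, "image", "load", tarball)      // restore from the tar
		fmt.Println("round trip complete")
	}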

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image save kicbase/echo-server:functional-246462 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image rm kicbase/echo-server:functional-246462 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-246462
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-246462 image save --daemon kicbase/echo-server:functional-246462 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-246462
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.73s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-246462
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-246462
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-246462
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1001 18:55:29.267102  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:56:52.343261  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m58.609085683s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (179.45s)

TestMultiControlPlane/serial/DeployApp (8.77s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 kubectl -- rollout status deployment/busybox: (5.94298838s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-7rkds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-c698x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-fnnnf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-7rkds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-c698x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-fnnnf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-7rkds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-c698x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-fnnnf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.77s)

TestMultiControlPlane/serial/PingHostFromPods (1.61s)
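The shell pipeline run in each pod below — nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 — depends on the shape of BusyBox nslookup output, which looks roughly like this (an assumption about the busybox build in the image; the resolved IP matches the 192.168.49.1 pinged right after):

	Server:    10.96.0.10
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

	Name:      host.minikube.internal
	Address 1: 192.168.49.1 host.minikube.internal

awk 'NR==5' keeps only the fifth line, the Address record for the name being resolved, and cut -d' ' -f3 takes its third space-separated field: the host IP that the follow-up ping targets.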

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-7rkds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-7rkds -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-c698x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-c698x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-fnnnf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 kubectl -- exec busybox-7b57f96db7-fnnnf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)

TestMultiControlPlane/serial/AddWorkerNode (58.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 node add --alsologtostderr -v 5: (57.272779614s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5: (1.011801998s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.28s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-131098 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.02267358s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

TestMultiControlPlane/serial/CopyFile (19.53s)
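The test opens with status --output json (below). A sketch of decoding that output for a multi-node profile; the key names are inferred from the Status struct echoed elsewhere in this report's traces (&{Name: Host: Kubelet: APIServer: Kubeconfig: ...}), so treat them as an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeStatus struct {
		Name       string `json:"Name"`
		Host       string `json:"Host"`
		Kubelet    string `json:"Kubelet"`
		APIServer  string `json:"APIServer"`
		Kubeconfig string `json:"Kubeconfig"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-131098",
			"status", "--output", "json").Output()
		if err != nil {
			// status exits non-zero when any node is down; stdout may still hold the JSON.
			fmt.Println("status exited with error:", err)
		}
		var nodes []nodeStatus
		if err := json.Unmarshal(out, &nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
				n.Name, n.Host, n.Kubelet, n.APIServer)
		}
	}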

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --output json --alsologtostderr -v 5
E1001 18:58:24.269831  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.276265  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.287565  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.308962  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.350253  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.431929  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.594094  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 18:58:24.916189  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp testdata/cp-test.txt ha-131098:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test.txt"
E1001 18:58:25.559007  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3473980390/001/cp-test_ha-131098.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098:/home/docker/cp-test.txt ha-131098-m02:/home/docker/cp-test_ha-131098_ha-131098-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test.txt"
E1001 18:58:26.840249  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test_ha-131098_ha-131098-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098:/home/docker/cp-test.txt ha-131098-m03:/home/docker/cp-test_ha-131098_ha-131098-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test_ha-131098_ha-131098-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098:/home/docker/cp-test.txt ha-131098-m04:/home/docker/cp-test_ha-131098_ha-131098-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test_ha-131098_ha-131098-m04.txt"
E1001 18:58:29.401575  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp testdata/cp-test.txt ha-131098-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3473980390/001/cp-test_ha-131098-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m02:/home/docker/cp-test.txt ha-131098:/home/docker/cp-test_ha-131098-m02_ha-131098.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test_ha-131098-m02_ha-131098.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m02:/home/docker/cp-test.txt ha-131098-m03:/home/docker/cp-test_ha-131098-m02_ha-131098-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test_ha-131098-m02_ha-131098-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m02:/home/docker/cp-test.txt ha-131098-m04:/home/docker/cp-test_ha-131098-m02_ha-131098-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test_ha-131098-m02_ha-131098-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp testdata/cp-test.txt ha-131098-m03:/home/docker/cp-test.txt
E1001 18:58:34.522928  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3473980390/001/cp-test_ha-131098-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m03:/home/docker/cp-test.txt ha-131098:/home/docker/cp-test_ha-131098-m03_ha-131098.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test_ha-131098-m03_ha-131098.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m03:/home/docker/cp-test.txt ha-131098-m02:/home/docker/cp-test_ha-131098-m03_ha-131098-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test_ha-131098-m03_ha-131098-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m03:/home/docker/cp-test.txt ha-131098-m04:/home/docker/cp-test_ha-131098-m03_ha-131098-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test_ha-131098-m03_ha-131098-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp testdata/cp-test.txt ha-131098-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3473980390/001/cp-test_ha-131098-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m04:/home/docker/cp-test.txt ha-131098:/home/docker/cp-test_ha-131098-m04_ha-131098.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098 "sudo cat /home/docker/cp-test_ha-131098-m04_ha-131098.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m04:/home/docker/cp-test.txt ha-131098-m02:/home/docker/cp-test_ha-131098-m04_ha-131098-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m02 "sudo cat /home/docker/cp-test_ha-131098-m04_ha-131098-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 cp ha-131098-m04:/home/docker/cp-test.txt ha-131098-m03:/home/docker/cp-test_ha-131098-m04_ha-131098-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 ssh -n ha-131098-m03 "sudo cat /home/docker/cp-test_ha-131098-m04_ha-131098-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.53s)

TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
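After stopping m02, status re-checks each control-plane node. Per the trace below, it does two things per apiserver: confirm the process's freezer cgroup reads THAWED, then probe /healthz through the HA virtual endpoint. A loose sketch of those checks; the cgroup path placeholders and the skipped TLS verification are illustrative, not minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"strings"
	)

	func main() {
		// 1. Freezer state of the apiserver's cgroup (real path shown in the trace;
		//    the <...> segments are placeholders, not values from the log).
		state, err := os.ReadFile("/sys/fs/cgroup/freezer/docker/<container-id>/crio/<crio-container-id>/freezer.state")
		if err == nil && strings.TrimSpace(string(state)) != "THAWED" {
			fmt.Println("apiserver cgroup is frozen")
			return
		}

		// 2. healthz probe against the HA virtual IP from the trace.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // expect "200 OK" on healthy nodes
	}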

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node stop m02 --alsologtostderr -v 5
E1001 18:58:44.764161  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 node stop m02 --alsologtostderr -v 5: (12.05830356s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5: exit status 7 (753.502186ms)

-- stdout --
	ha-131098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-131098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-131098-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-131098-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1001 18:58:55.633535  337563 out.go:360] Setting OutFile to fd 1 ...
	I1001 18:58:55.633679  337563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:58:55.633687  337563 out.go:374] Setting ErrFile to fd 2...
	I1001 18:58:55.633692  337563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 18:58:55.633967  337563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 18:58:55.634199  337563 out.go:368] Setting JSON to false
	I1001 18:58:55.634260  337563 mustload.go:65] Loading cluster: ha-131098
	I1001 18:58:55.634338  337563 notify.go:220] Checking for updates...
	I1001 18:58:55.635666  337563 config.go:182] Loaded profile config "ha-131098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 18:58:55.635693  337563 status.go:174] checking status of ha-131098 ...
	I1001 18:58:55.636508  337563 cli_runner.go:164] Run: docker container inspect ha-131098 --format={{.State.Status}}
	I1001 18:58:55.656466  337563 status.go:371] ha-131098 host status = "Running" (err=<nil>)
	I1001 18:58:55.656492  337563 host.go:66] Checking if "ha-131098" exists ...
	I1001 18:58:55.656807  337563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-131098
	I1001 18:58:55.684953  337563 host.go:66] Checking if "ha-131098" exists ...
	I1001 18:58:55.685261  337563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:58:55.685304  337563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-131098
	I1001 18:58:55.710207  337563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/ha-131098/id_rsa Username:docker}
	I1001 18:58:55.812225  337563 ssh_runner.go:195] Run: systemctl --version
	I1001 18:58:55.816904  337563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:58:55.828506  337563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 18:58:55.887446  337563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-01 18:58:55.87727643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 18:58:55.888004  337563 kubeconfig.go:125] found "ha-131098" server: "https://192.168.49.254:8443"
	I1001 18:58:55.888035  337563 api_server.go:166] Checking apiserver status ...
	I1001 18:58:55.888076  337563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:58:55.899239  337563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I1001 18:58:55.908559  337563 api_server.go:182] apiserver freezer: "8:freezer:/docker/e0ad6e6e5972017dc9353041b67f471fc42194335696783eef244b5b9aa6eb6f/crio/crio-1151df374c3e3fbf44029e7e1ebc60e0e067bad989e70da407fbe22cc51f94f9"
	I1001 18:58:55.908635  337563 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0ad6e6e5972017dc9353041b67f471fc42194335696783eef244b5b9aa6eb6f/crio/crio-1151df374c3e3fbf44029e7e1ebc60e0e067bad989e70da407fbe22cc51f94f9/freezer.state
	I1001 18:58:55.917226  337563 api_server.go:204] freezer state: "THAWED"
	I1001 18:58:55.917254  337563 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1001 18:58:55.925547  337563 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1001 18:58:55.925573  337563 status.go:463] ha-131098 apiserver status = Running (err=<nil>)
	I1001 18:58:55.925584  337563 status.go:176] ha-131098 status: &{Name:ha-131098 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:58:55.925600  337563 status.go:174] checking status of ha-131098-m02 ...
	I1001 18:58:55.925893  337563 cli_runner.go:164] Run: docker container inspect ha-131098-m02 --format={{.State.Status}}
	I1001 18:58:55.942032  337563 status.go:371] ha-131098-m02 host status = "Stopped" (err=<nil>)
	I1001 18:58:55.942052  337563 status.go:384] host is not running, skipping remaining checks
	I1001 18:58:55.942058  337563 status.go:176] ha-131098-m02 status: &{Name:ha-131098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:58:55.942079  337563 status.go:174] checking status of ha-131098-m03 ...
	I1001 18:58:55.942375  337563 cli_runner.go:164] Run: docker container inspect ha-131098-m03 --format={{.State.Status}}
	I1001 18:58:55.959051  337563 status.go:371] ha-131098-m03 host status = "Running" (err=<nil>)
	I1001 18:58:55.959074  337563 host.go:66] Checking if "ha-131098-m03" exists ...
	I1001 18:58:55.959359  337563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-131098-m03
	I1001 18:58:55.975987  337563 host.go:66] Checking if "ha-131098-m03" exists ...
	I1001 18:58:55.976293  337563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:58:55.976335  337563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-131098-m03
	I1001 18:58:55.999300  337563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/ha-131098-m03/id_rsa Username:docker}
	I1001 18:58:56.100826  337563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:58:56.113155  337563 kubeconfig.go:125] found "ha-131098" server: "https://192.168.49.254:8443"
	I1001 18:58:56.113184  337563 api_server.go:166] Checking apiserver status ...
	I1001 18:58:56.113227  337563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 18:58:56.124379  337563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup
	I1001 18:58:56.133939  337563 api_server.go:182] apiserver freezer: "8:freezer:/docker/a069763240f14d3e2204ee1dfa0da78ce753152c4a913b6d3b1d67f942e944bd/crio/crio-f49077daaa849ae7d19d0555cb588fb5c8340251957fed3cc6d799910e4686c8"
	I1001 18:58:56.134014  337563 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a069763240f14d3e2204ee1dfa0da78ce753152c4a913b6d3b1d67f942e944bd/crio/crio-f49077daaa849ae7d19d0555cb588fb5c8340251957fed3cc6d799910e4686c8/freezer.state
	I1001 18:58:56.143914  337563 api_server.go:204] freezer state: "THAWED"
	I1001 18:58:56.143940  337563 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1001 18:58:56.152092  337563 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1001 18:58:56.152121  337563 status.go:463] ha-131098-m03 apiserver status = Running (err=<nil>)
	I1001 18:58:56.152131  337563 status.go:176] ha-131098-m03 status: &{Name:ha-131098-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 18:58:56.152148  337563 status.go:174] checking status of ha-131098-m04 ...
	I1001 18:58:56.152471  337563 cli_runner.go:164] Run: docker container inspect ha-131098-m04 --format={{.State.Status}}
	I1001 18:58:56.169303  337563 status.go:371] ha-131098-m04 host status = "Running" (err=<nil>)
	I1001 18:58:56.169332  337563 host.go:66] Checking if "ha-131098-m04" exists ...
	I1001 18:58:56.169744  337563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-131098-m04
	I1001 18:58:56.191884  337563 host.go:66] Checking if "ha-131098-m04" exists ...
	I1001 18:58:56.192241  337563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 18:58:56.192289  337563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-131098-m04
	I1001 18:58:56.212863  337563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/ha-131098-m04/id_rsa Username:docker}
	I1001 18:58:56.312097  337563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 18:58:56.323921  337563 status.go:176] ha-131098-m04 status: &{Name:ha-131098-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
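
The stderr log above traces minikube's apiserver health probe: pgrep finds the kube-apiserver process, its freezer cgroup path is read from /proc/<pid>/cgroup, freezer.state is checked so a paused container is not reported as running, and finally /healthz is queried on the cluster VIP. A minimal Go sketch of the same sequence (not minikube's actual code; it assumes cgroup v1, root on the node, and the VIP/port seen in this run):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, matching the full command line.
	pidOut, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(pidOut))

	cgroups, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		panic(err)
	}
	// Pick out the freezer line, e.g. "8:freezer:/docker/<container>/crio/crio-<id>".
	m := regexp.MustCompile(`(?m)^\d+:freezer:(.+)$`).FindStringSubmatch(string(cgroups))
	if m == nil {
		panic("no freezer controller for this process (cgroup v1 assumed)")
	}

	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
	if err != nil {
		panic(err)
	}
	fmt.Printf("freezer state: %s", state) // expect "THAWED"

	// The apiserver serves a self-signed cert on the VIP, so skip verification here.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.StatusCode) // expect 200
}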

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.14s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node start m02 --alsologtostderr -v 5
E1001 18:59:05.245697  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 node start m02 --alsologtostderr -v 5: (30.746150194s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5: (1.265838536s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.282965803s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.84s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 stop --alsologtostderr -v 5
E1001 18:59:46.207276  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 stop --alsologtostderr -v 5: (27.175711972s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 start --wait true --alsologtostderr -v 5
E1001 19:00:29.270221  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:01:08.129395  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 start --wait true --alsologtostderr -v 5: (1m33.495752871s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.84s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.51s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 node delete m03 --alsologtostderr -v 5: (11.504226077s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.51s)
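
The kubectl command above uses a go-template to print each node's Ready condition. A self-contained Go illustration of how that template evaluates: the structs below are a hypothetical, trimmed stand-in for kubectl's NodeList JSON; kubectl itself resolves the lowercase .items/.status paths against JSON maps, so the field names are capitalized here to match Go structs.

package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

type nodeList struct{ Items []node }

// Same shape as the test's template, adapted to exported Go field names.
const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	ready := condition{Type: "Ready", Status: "True"}
	list := nodeList{Items: []node{{}, {}}}
	list.Items[0].Status.Conditions = []condition{{Type: "MemoryPressure", Status: "False"}, ready}
	list.Items[1].Status.Conditions = []condition{ready}
	// Prints " True" once per node; the test greps these to confirm every node is Ready.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
}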

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

TestMultiControlPlane/serial/StopCluster (35.68s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 stop --alsologtostderr -v 5: (35.563449297s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5: exit status 7 (118.867858ms)
-- stdout --
	ha-131098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-131098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-131098-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1001 19:02:20.289175  351481 out.go:360] Setting OutFile to fd 1 ...
	I1001 19:02:20.289431  351481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:02:20.289459  351481 out.go:374] Setting ErrFile to fd 2...
	I1001 19:02:20.289477  351481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:02:20.289767  351481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 19:02:20.290016  351481 out.go:368] Setting JSON to false
	I1001 19:02:20.290081  351481 mustload.go:65] Loading cluster: ha-131098
	I1001 19:02:20.290122  351481 notify.go:220] Checking for updates...
	I1001 19:02:20.290570  351481 config.go:182] Loaded profile config "ha-131098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 19:02:20.290609  351481 status.go:174] checking status of ha-131098 ...
	I1001 19:02:20.291458  351481 cli_runner.go:164] Run: docker container inspect ha-131098 --format={{.State.Status}}
	I1001 19:02:20.309212  351481 status.go:371] ha-131098 host status = "Stopped" (err=<nil>)
	I1001 19:02:20.309234  351481 status.go:384] host is not running, skipping remaining checks
	I1001 19:02:20.309240  351481 status.go:176] ha-131098 status: &{Name:ha-131098 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:02:20.309264  351481 status.go:174] checking status of ha-131098-m02 ...
	I1001 19:02:20.309559  351481 cli_runner.go:164] Run: docker container inspect ha-131098-m02 --format={{.State.Status}}
	I1001 19:02:20.339038  351481 status.go:371] ha-131098-m02 host status = "Stopped" (err=<nil>)
	I1001 19:02:20.339061  351481 status.go:384] host is not running, skipping remaining checks
	I1001 19:02:20.339070  351481 status.go:176] ha-131098-m02 status: &{Name:ha-131098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:02:20.339092  351481 status.go:174] checking status of ha-131098-m04 ...
	I1001 19:02:20.339405  351481 cli_runner.go:164] Run: docker container inspect ha-131098-m04 --format={{.State.Status}}
	I1001 19:02:20.357243  351481 status.go:371] ha-131098-m04 host status = "Stopped" (err=<nil>)
	I1001 19:02:20.357264  351481 status.go:384] host is not running, skipping remaining checks
	I1001 19:02:20.357270  351481 status.go:176] ha-131098-m04 status: &{Name:ha-131098-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.68s)

TestMultiControlPlane/serial/RestartCluster (67.2s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1001 19:03:24.270894  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m6.244630871s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.20s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (89.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 node add --control-plane --alsologtostderr -v 5
E1001 19:03:51.971931  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 node add --control-plane --alsologtostderr -v 5: (1m28.046204733s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-131098 status --alsologtostderr -v 5: (1.027018053s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (89.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.03746106s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

TestJSONOutput/start/Command (78.95s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-268656 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1001 19:05:29.267991  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-268656 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.939251831s)
--- PASS: TestJSONOutput/start/Command (78.95s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-268656 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-268656 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-268656 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-268656 --output=json --user=testUser: (5.873592063s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-248276 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-248276 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.696051ms)
-- stdout --
	{"specversion":"1.0","id":"29395dc1-85c5-4ddf-a6bc-a32e9d7d75cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-248276] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1baf90a-dfa1-4585-a55d-7aa9bd4f7d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21631"}}
	{"specversion":"1.0","id":"fc7bdaec-eb93-4113-b453-52fbda4add12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1733fb49-b95e-4215-904e-044d7450e447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig"}}
	{"specversion":"1.0","id":"bd9f560c-d586-47f4-a133-3a2f018e4bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube"}}
	{"specversion":"1.0","id":"a5bc3b61-047e-43cf-8be5-947ce8563836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d3533d48-be21-422b-bdb8-20aca5d6ebd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"989e0868-b3c7-4507-bdf9-d0db380ce13e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-248276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-248276
--- PASS: TestErrorJSONOutput (0.25s)
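
Each line in the --output=json stream above is a CloudEvents envelope carrying a minikube-specific data map. A small sketch that decodes the error event from this run and pulls out the fields the test asserts on; the struct covers only the fields visible in the log.

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed CloudEvents envelope as emitted by minikube's JSON output.
type cloudEvent struct {
	Specversion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The final event from the TestErrorJSONOutput stdout above.
	line := `{"specversion":"1.0","id":"989e0868-b3c7-4507-bdf9-d0db380ce13e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}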

TestKicCustomNetwork/create_custom_network (67.09s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-275471 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-275471 --network=: (1m4.92254301s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-275471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-275471
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-275471: (2.14075315s)
--- PASS: TestKicCustomNetwork/create_custom_network (67.09s)

TestKicCustomNetwork/use_default_bridge_network (36.18s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-168459 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-168459 --network=bridge: (34.146073796s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-168459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-168459
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-168459: (2.016961443s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.18s)

TestKicExistingNetwork (37.31s)
=== RUN   TestKicExistingNetwork
I1001 19:08:21.370427  290016 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1001 19:08:21.385880  290016 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1001 19:08:21.385965  290016 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1001 19:08:21.385982  290016 cli_runner.go:164] Run: docker network inspect existing-network
W1001 19:08:21.401410  290016 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1001 19:08:21.401444  290016 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1001 19:08:21.401467  290016 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1001 19:08:21.401574  290016 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1001 19:08:21.418681  290016 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9656bfb4cfbc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:5e:a1:17:4a:93} reservation:<nil>}
I1001 19:08:21.419026  290016 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018174c0}
I1001 19:08:21.419050  290016 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1001 19:08:21.419100  290016 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1001 19:08:21.487137  290016 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-163462 --network=existing-network
E1001 19:08:24.271437  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-163462 --network=existing-network: (35.183250382s)
helpers_test.go:175: Cleaning up "existing-network-163462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-163462
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-163462: (1.978732111s)
I1001 19:08:58.665198  290016 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.31s)
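
The setup traced in the I1001 lines above inspects the default bridge, skips subnets that are already taken, and creates a labeled bridge network on the first free private /24. A sketch that replays just the creation step through the docker CLI, with the subnet, gateway, MTU, and labels copied verbatim from this run (the network name is arbitrary):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "docker network create" invocation from the log above.
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("%s", out) // prints the new network ID on success
	if err != nil {
		panic(err)
	}
}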

TestKicCustomSubnet (32.24s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-266836 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-266836 --subnet=192.168.60.0/24: (30.124094071s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-266836 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-266836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-266836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-266836: (2.086758569s)
--- PASS: TestKicCustomSubnet (32.24s)

TestKicStaticIP (36.07s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-013193 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-013193 --static-ip=192.168.200.200: (33.803438586s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-013193 ip
helpers_test.go:175: Cleaning up "static-ip-013193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-013193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-013193: (2.097461512s)
--- PASS: TestKicStaticIP (36.07s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (73.88s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-726157 --driver=docker  --container-runtime=crio
E1001 19:10:29.270959  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-726157 --driver=docker  --container-runtime=crio: (32.33966282s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-728815 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-728815 --driver=docker  --container-runtime=crio: (36.15797481s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-726157
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-728815
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-728815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-728815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-728815: (1.996971084s)
helpers_test.go:175: Cleaning up "first-726157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-726157
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-726157: (1.913082727s)
--- PASS: TestMinikubeProfile (73.88s)

TestMountStart/serial/StartWithMountFirst (7.21s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-755514 --memory=3072 --mount-string /tmp/TestMountStartserial2632347150/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-755514 --memory=3072 --mount-string /tmp/TestMountStartserial2632347150/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.205523775s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.21s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-755514 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.77s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-757368 --memory=3072 --mount-string /tmp/TestMountStartserial2632347150/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-757368 --memory=3072 --mount-string /tmp/TestMountStartserial2632347150/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.764985401s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.77s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-757368 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.64s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-755514 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-755514 --alsologtostderr -v=5: (1.637186663s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-757368 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-757368
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-757368: (1.213275962s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.94s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-757368
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-757368: (6.939957538s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-757368 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (132.87s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-088169 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1001 19:13:24.270032  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:13:32.345070  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-088169 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m12.344262542s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (132.87s)

TestMultiNode/serial/DeployApp2Nodes (7.58s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-088169 -- rollout status deployment/busybox: (5.762466639s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-22fzx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-t2wcp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-22fzx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-t2wcp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-22fzx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-t2wcp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.58s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-22fzx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-22fzx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-t2wcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-088169 -- exec busybox-7b57f96db7-t2wcp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
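
The pipeline in the exec commands above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) isolates the host IP: line 5 of busybox nslookup output is the resolved-address line, and its third space-separated field is the IP, which the follow-up ping then targets. The same extraction in Go, fed a hypothetical busybox-style transcript:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical busybox nslookup output; only the line/field positions matter.
	out := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal`

	lines := strings.Split(out, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5', split like cut -d' '
	fmt.Println(fields[2])                 // -f3 -> "192.168.67.1"
}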

TestMultiNode/serial/AddNode (55.88s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-088169 -v=5 --alsologtostderr
E1001 19:14:47.333425  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-088169 -v=5 --alsologtostderr: (55.211992927s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.88s)

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-088169 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.28s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp testdata/cp-test.txt multinode-088169:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile211527356/001/cp-test_multinode-088169.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169:/home/docker/cp-test.txt multinode-088169-m02:/home/docker/cp-test_multinode-088169_multinode-088169-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m02 "sudo cat /home/docker/cp-test_multinode-088169_multinode-088169-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169:/home/docker/cp-test.txt multinode-088169-m03:/home/docker/cp-test_multinode-088169_multinode-088169-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m03 "sudo cat /home/docker/cp-test_multinode-088169_multinode-088169-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp testdata/cp-test.txt multinode-088169-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile211527356/001/cp-test_multinode-088169-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169-m02:/home/docker/cp-test.txt multinode-088169:/home/docker/cp-test_multinode-088169-m02_multinode-088169.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169 "sudo cat /home/docker/cp-test_multinode-088169-m02_multinode-088169.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169-m02:/home/docker/cp-test.txt multinode-088169-m03:/home/docker/cp-test_multinode-088169-m02_multinode-088169-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m03 "sudo cat /home/docker/cp-test_multinode-088169-m02_multinode-088169-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp testdata/cp-test.txt multinode-088169-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile211527356/001/cp-test_multinode-088169-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169-m03:/home/docker/cp-test.txt multinode-088169:/home/docker/cp-test_multinode-088169-m03_multinode-088169.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169 "sudo cat /home/docker/cp-test_multinode-088169-m03_multinode-088169.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 cp multinode-088169-m03:/home/docker/cp-test.txt multinode-088169-m02:/home/docker/cp-test_multinode-088169-m03_multinode-088169-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 ssh -n multinode-088169-m02 "sudo cat /home/docker/cp-test_multinode-088169-m03_multinode-088169-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.28s)
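
Note: the CopyFile steps above exercise `minikube cp` in every direction (host to node, node to host, node to node) and verify each copy by catting the destination over `minikube ssh -n`. A minimal Go sketch of the same cp-then-verify pattern, reusing the profile, node, and paths from the run above (illustrative only, not the harness's actual helpers):

package main

import (
	"fmt"
	"os/exec"
)

// copyAndVerify copies a host file into a node with `minikube cp`, then
// reads it back over `minikube ssh -n` to confirm the contents arrived,
// mirroring the cp/ssh pairs in the log above.
func copyAndVerify(profile, node, src, dst string) error {
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("verify failed: %v: %s", err, out)
	}
	fmt.Printf("%s on %s contains: %s", dst, node, out)
	return nil
}

func main() {
	if err := copyAndVerify("multinode-088169", "multinode-088169-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println(err)
	}
}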

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-088169 node stop m03: (1.221616154s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-088169 status: exit status 7 (554.977004ms)

                                                
                                                
-- stdout --
	multinode-088169
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-088169-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-088169-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr: exit status 7 (537.996305ms)

                                                
                                                
-- stdout --
	multinode-088169
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-088169-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-088169-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:15:18.821105  404739 out.go:360] Setting OutFile to fd 1 ...
	I1001 19:15:18.821222  404739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:15:18.821233  404739 out.go:374] Setting ErrFile to fd 2...
	I1001 19:15:18.821238  404739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:15:18.821674  404739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 19:15:18.821927  404739 out.go:368] Setting JSON to false
	I1001 19:15:18.821992  404739 mustload.go:65] Loading cluster: multinode-088169
	I1001 19:15:18.822120  404739 notify.go:220] Checking for updates...
	I1001 19:15:18.822538  404739 config.go:182] Loaded profile config "multinode-088169": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 19:15:18.822551  404739 status.go:174] checking status of multinode-088169 ...
	I1001 19:15:18.823245  404739 cli_runner.go:164] Run: docker container inspect multinode-088169 --format={{.State.Status}}
	I1001 19:15:18.843487  404739 status.go:371] multinode-088169 host status = "Running" (err=<nil>)
	I1001 19:15:18.843514  404739 host.go:66] Checking if "multinode-088169" exists ...
	I1001 19:15:18.843827  404739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-088169
	I1001 19:15:18.870156  404739 host.go:66] Checking if "multinode-088169" exists ...
	I1001 19:15:18.870473  404739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 19:15:18.870526  404739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-088169
	I1001 19:15:18.889063  404739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/multinode-088169/id_rsa Username:docker}
	I1001 19:15:18.984396  404739 ssh_runner.go:195] Run: systemctl --version
	I1001 19:15:18.988808  404739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:15:19.008207  404739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:15:19.075354  404739 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-01 19:15:19.064127924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 19:15:19.076119  404739 kubeconfig.go:125] found "multinode-088169" server: "https://192.168.67.2:8443"
	I1001 19:15:19.076171  404739 api_server.go:166] Checking apiserver status ...
	I1001 19:15:19.076225  404739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:15:19.088217  404739 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup
	I1001 19:15:19.098837  404739 api_server.go:182] apiserver freezer: "8:freezer:/docker/07737f13bb92d0aee61c3ebbc5e52b8d99af7d9b822776208eeecd5eb8e9a941/crio/crio-ff707dee942ade8c247f0847e32b2fdd0d667cc7851609e295d4f7e5331b5147"
	I1001 19:15:19.098912  404739 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/07737f13bb92d0aee61c3ebbc5e52b8d99af7d9b822776208eeecd5eb8e9a941/crio/crio-ff707dee942ade8c247f0847e32b2fdd0d667cc7851609e295d4f7e5331b5147/freezer.state
	I1001 19:15:19.108063  404739 api_server.go:204] freezer state: "THAWED"
	I1001 19:15:19.108092  404739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1001 19:15:19.116615  404739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1001 19:15:19.116650  404739 status.go:463] multinode-088169 apiserver status = Running (err=<nil>)
	I1001 19:15:19.116664  404739 status.go:176] multinode-088169 status: &{Name:multinode-088169 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:15:19.116681  404739 status.go:174] checking status of multinode-088169-m02 ...
	I1001 19:15:19.117002  404739 cli_runner.go:164] Run: docker container inspect multinode-088169-m02 --format={{.State.Status}}
	I1001 19:15:19.135681  404739 status.go:371] multinode-088169-m02 host status = "Running" (err=<nil>)
	I1001 19:15:19.135711  404739 host.go:66] Checking if "multinode-088169-m02" exists ...
	I1001 19:15:19.136073  404739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-088169-m02
	I1001 19:15:19.152786  404739 host.go:66] Checking if "multinode-088169-m02" exists ...
	I1001 19:15:19.153091  404739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 19:15:19.153133  404739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-088169-m02
	I1001 19:15:19.174905  404739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/21631-288146/.minikube/machines/multinode-088169-m02/id_rsa Username:docker}
	I1001 19:15:19.271876  404739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:15:19.283506  404739 status.go:176] multinode-088169-m02 status: &{Name:multinode-088169-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:15:19.283542  404739 status.go:174] checking status of multinode-088169-m03 ...
	I1001 19:15:19.283861  404739 cli_runner.go:164] Run: docker container inspect multinode-088169-m03 --format={{.State.Status}}
	I1001 19:15:19.300842  404739 status.go:371] multinode-088169-m03 host status = "Stopped" (err=<nil>)
	I1001 19:15:19.300868  404739 status.go:384] host is not running, skipping remaining checks
	I1001 19:15:19.300875  404739 status.go:176] multinode-088169-m03 status: &{Name:multinode-088169-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
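
Note: both status runs above exit 7 rather than 0 because one node is stopped; exit 7 is informational here, not a failure, and the per-node report is still printed on stdout. A sketch of how a caller might treat it, assuming only what the runs above show (exit code 7 flags a stopped component):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 7 from `minikube status` means a host/kubelet/apiserver is
	// stopped (see the runs above); the textual status is still on stdout.
	out, err := exec.Command("minikube", "-p", "multinode-088169", "status").CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil || (errors.As(err, &ee) && ee.ExitCode() == 7):
		fmt.Printf("%s", out) // healthy or partially stopped: report it
	default:
		fmt.Println("status failed:", err)
	}
}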

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-088169 node start m03 -v=5 --alsologtostderr: (7.176351075s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.96s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-088169
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-088169
E1001 19:15:29.268130  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-088169: (24.772531584s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-088169 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-088169 --wait=true -v=5 --alsologtostderr: (48.042562621s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-088169
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.95s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-088169 node delete m03: (4.903687436s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)
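
Note: the go-template in the last step walks every node's status.conditions and prints the Ready condition's status, one per line, so the test can assert that only Ready nodes remain after the delete. An equivalent check in Go against `kubectl get nodes -o json` (a sketch; the struct covers only the fields used):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList models just the parts of the Node list we need: each node's
// name and its status conditions.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Println(n.Metadata.Name, c.Status) // expect "True" per node
			}
		}
	}
}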

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.86s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-088169 stop: (23.665597756s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-088169 status: exit status 7 (97.290101ms)

                                                
                                                
-- stdout --
	multinode-088169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-088169-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr: exit status 7 (94.485752ms)

                                                
                                                
-- stdout --
	multinode-088169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-088169-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 19:17:09.728654  412691 out.go:360] Setting OutFile to fd 1 ...
	I1001 19:17:09.728866  412691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:09.728908  412691 out.go:374] Setting ErrFile to fd 2...
	I1001 19:17:09.728929  412691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:17:09.729244  412691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 19:17:09.729498  412691 out.go:368] Setting JSON to false
	I1001 19:17:09.729585  412691 mustload.go:65] Loading cluster: multinode-088169
	I1001 19:17:09.729667  412691 notify.go:220] Checking for updates...
	I1001 19:17:09.730897  412691 config.go:182] Loaded profile config "multinode-088169": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 19:17:09.730945  412691 status.go:174] checking status of multinode-088169 ...
	I1001 19:17:09.731650  412691 cli_runner.go:164] Run: docker container inspect multinode-088169 --format={{.State.Status}}
	I1001 19:17:09.750526  412691 status.go:371] multinode-088169 host status = "Stopped" (err=<nil>)
	I1001 19:17:09.750545  412691 status.go:384] host is not running, skipping remaining checks
	I1001 19:17:09.750552  412691 status.go:176] multinode-088169 status: &{Name:multinode-088169 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 19:17:09.750583  412691 status.go:174] checking status of multinode-088169-m02 ...
	I1001 19:17:09.750969  412691 cli_runner.go:164] Run: docker container inspect multinode-088169-m02 --format={{.State.Status}}
	I1001 19:17:09.772748  412691 status.go:371] multinode-088169-m02 host status = "Stopped" (err=<nil>)
	I1001 19:17:09.772769  412691 status.go:384] host is not running, skipping remaining checks
	I1001 19:17:09.772780  412691 status.go:176] multinode-088169-m02 status: &{Name:multinode-088169-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-088169 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-088169 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.00225257s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-088169 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.72s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.68s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-088169
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-088169-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-088169-m02 --driver=docker  --container-runtime=crio: exit status 14 (93.963399ms)

                                                
                                                
-- stdout --
	* [multinode-088169-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-088169-m02' is duplicated with machine name 'multinode-088169-m02' in profile 'multinode-088169'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-088169-m03 --driver=docker  --container-runtime=crio
E1001 19:18:24.274020  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-088169-m03 --driver=docker  --container-runtime=crio: (29.225555832s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-088169
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-088169: exit status 80 (336.178753ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-088169 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-088169-m03 already exists in multinode-088169-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-088169-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-088169-m03: (1.966995482s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.68s)

                                                
                                    
TestPreload (128.68s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-182001 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-182001 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.739628491s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-182001 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-182001 image pull gcr.io/k8s-minikube/busybox: (3.651775025s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-182001
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-182001: (5.820338101s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-182001 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1001 19:20:29.267370  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-182001 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.836176141s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-182001 image list
helpers_test.go:175: Cleaning up "test-preload-182001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-182001
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-182001: (2.339217347s)
--- PASS: TestPreload (128.68s)
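
Note: TestPreload checks one invariant: an image pulled into a cluster started with --preload=false must still be listed after a stop and a preload-enabled restart. A sketch of that pull/stop/start/list loop using the same commands as above (profile name illustrative, error handling abbreviated):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Re-run the preload invariant checked above: an image pulled before a
// stop must still be listed after the cluster restarts.
func main() {
	profile := "test-preload-182001"
	steps := [][]string{
		{"minikube", "-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"minikube", "stop", "-p", profile},
		{"minikube", "start", "-p", profile, "--wait=true"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v: %s\n", s, err, out)
			return
		}
	}
	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "busybox") {
		fmt.Println("image survived the restart")
	}
}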

                                                
                                    
TestScheduledStopUnix (105.81s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-659268 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-659268 --memory=3072 --driver=docker  --container-runtime=crio: (29.650261053s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-659268 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-659268 -n scheduled-stop-659268
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-659268 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1001 19:21:18.080917  290016 retry.go:31] will retry after 132.264µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.082144  290016 retry.go:31] will retry after 209.602µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.083263  290016 retry.go:31] will retry after 118.335µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.084402  290016 retry.go:31] will retry after 379.136µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.085482  290016 retry.go:31] will retry after 613.644µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.086601  290016 retry.go:31] will retry after 784.847µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.087796  290016 retry.go:31] will retry after 691.713µs: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.088949  290016 retry.go:31] will retry after 2.201817ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.093130  290016 retry.go:31] will retry after 3.504507ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.097391  290016 retry.go:31] will retry after 3.299636ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.102964  290016 retry.go:31] will retry after 7.521652ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.111565  290016 retry.go:31] will retry after 8.478581ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.120786  290016 retry.go:31] will retry after 14.528382ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.136016  290016 retry.go:31] will retry after 12.08826ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.150233  290016 retry.go:31] will retry after 22.315592ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
I1001 19:21:18.173491  290016 retry.go:31] will retry after 55.186047ms: open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/scheduled-stop-659268/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-659268 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-659268 -n scheduled-stop-659268
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-659268
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-659268 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-659268
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-659268: exit status 7 (72.762087ms)

                                                
                                                
-- stdout --
	scheduled-stop-659268
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-659268 -n scheduled-stop-659268
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-659268 -n scheduled-stop-659268: exit status 7 (68.00033ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-659268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-659268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-659268: (4.510249232s)
--- PASS: TestScheduledStopUnix (105.81s)
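
Note: the scheduled-stop flow above schedules a stop (--schedule), reschedules it, cancels it (--cancel-scheduled), then schedules again and waits for the host to reach Stopped, at which point `status` exits 7. A polling sketch under those assumptions (profile name from the run; timings arbitrary):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-659268" // illustrative, from the run above
	if out, err := exec.Command("minikube", "stop", "-p", profile,
		"--schedule", "15s").CombinedOutput(); err != nil {
		fmt.Printf("scheduling failed: %v: %s\n", err, out)
		return
	}
	for i := 0; i < 20; i++ {
		// status exits 7 once the host is stopped, so ignore the error and
		// inspect the formatted output instead.
		out, _ := exec.Command("minikube", "status", "--format={{.Host}}",
			"-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped on schedule")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}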

                                                
                                    
TestInsufficientStorage (10.5s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-908376 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-908376 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.010075579s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"492b3ddc-2cee-412e-bd3d-34821882c556","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-908376] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"16ef89e5-32ba-4a41-8c33-23283272b06a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21631"}}
	{"specversion":"1.0","id":"b51e2ecf-dd0c-4534-8f55-625905d6b14b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8427488d-8559-4ad9-9c41-f2e23ce2e3a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig"}}
	{"specversion":"1.0","id":"748c3859-ae8a-45d8-bc67-0249d2c54022","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube"}}
	{"specversion":"1.0","id":"0b541e5c-c1a9-4c6b-b8a3-aa8d64eaf4ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c86e12de-89b8-4138-8078-c93daf2d32d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d3abfafe-1423-4ba3-928f-76e247e045da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"80518f98-542b-4fda-8721-1f4fd1b242f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"63e512ea-b3de-4748-ba45-0fb833bbc17f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"803c8c74-c9e9-4249-b7df-ad975a9baf31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fc14cfcf-266d-4835-b8b2-cf1bb5117762","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-908376\" primary control-plane node in \"insufficient-storage-908376\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba9eb260-e0f3-49b3-9130-aa43e8ef74a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"035ea607-614c-4f22-b7b5-8a83fa245c7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"58e0c62a-34f7-4f7d-a511-a392d50c124a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-908376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-908376 --output=json --layout=cluster: exit status 7 (288.662205ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-908376","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-908376","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 19:22:41.986245  430048 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-908376" does not appear in /home/jenkins/minikube-integration/21631-288146/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-908376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-908376 --output=json --layout=cluster: exit status 7 (321.661951ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-908376","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-908376","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 19:22:42.306865  430112 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-908376" does not appear in /home/jenkins/minikube-integration/21631-288146/kubeconfig
	E1001 19:22:42.318196  430112 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/insufficient-storage-908376/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-908376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-908376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-908376: (1.88029471s)
--- PASS: TestInsufficientStorage (10.50s)
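
Note: with --output=json, `minikube start` emits one CloudEvents-style JSON object per line; the final io.k8s.sigs.minikube.error event above carries the advice text, the exit code ("26"), and the issue name (RSRC_DOCKER_STORAGE). A sketch that decodes such a stream (field names taken from the output above; all data values are strings there):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models one line of `minikube start --output=json`.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprog
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}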

                                                
                                    
TestRunningBinaryUpgrade (58.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3897228820 start -p running-upgrade-995861 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3897228820 start -p running-upgrade-995861 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.69224704s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-995861 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-995861 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.150178599s)
helpers_test.go:175: Cleaning up "running-upgrade-995861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-995861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-995861: (2.085730642s)
--- PASS: TestRunningBinaryUpgrade (58.55s)

                                                
                                    
TestKubernetesUpgrade (355.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.582206702s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-537001
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-537001: (1.255104722s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-537001 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-537001 status --format={{.Host}}: exit status 7 (72.048757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.707526651s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-537001 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (132.936434ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-537001] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-537001
	    minikube start -p kubernetes-upgrade-537001 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5370012 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-537001 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-537001 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.077350922s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-537001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-537001
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-537001: (2.36057008s)
--- PASS: TestKubernetesUpgrade (355.32s)
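
Note: the downgrade attempt exits 106 (K8S_DOWNGRADE_UNSUPPORTED) and minikube prints recovery options rather than touching the cluster. A sketch that detects the guard and falls back to the delete-and-recreate path suggested in the stderr above (profile and versions from the run; error handling trimmed):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubernetes-upgrade-537001" // from the run above
	err := exec.Command("minikube", "start", "-p", profile,
		"--kubernetes-version=v1.28.0").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		// Downgrade guard hit: follow option 1 from the stderr above and
		// recreate the cluster at the older version.
		fmt.Println("downgrade refused; deleting and recreating")
		exec.Command("minikube", "delete", "-p", profile).Run()
		err = exec.Command("minikube", "start", "-p", profile,
			"--kubernetes-version=v1.28.0").Run()
	}
	if err != nil {
		fmt.Println("start failed:", err)
	}
}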

                                                
                                    
TestMissingContainerUpgrade (136.76s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4210458943 start -p missing-upgrade-493289 --memory=3072 --driver=docker  --container-runtime=crio
E1001 19:23:24.269969  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4210458943 start -p missing-upgrade-493289 --memory=3072 --driver=docker  --container-runtime=crio: (1m11.419814086s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-493289
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-493289
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-493289 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-493289 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.568061296s)
helpers_test.go:175: Cleaning up "missing-upgrade-493289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-493289
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-493289: (2.787145205s)
--- PASS: TestMissingContainerUpgrade (136.76s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-529655 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-529655 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (93.931026ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-529655] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.17s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-529655 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-529655 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.789719389s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-529655 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (35.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-529655 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-529655 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.188484707s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-529655 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-529655 status -o json: exit status 2 (373.058087ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-529655","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-529655
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-529655: (2.702877921s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.27s)

                                                
                                    
TestNoKubernetes/serial/Start (7.89s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-529655 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-529655 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.894080841s)
--- PASS: TestNoKubernetes/serial/Start (7.89s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-529655 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-529655 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.239678ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
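
Note: the check above passes precisely because it fails: `systemctl is-active --quiet` exits non-zero when the unit is inactive (status 3 in the stderr), which `minikube ssh` surfaces as a non-zero exit of its own (status 1 here). A sketch of the same inverted check, using the profile and command from the run above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test: ask systemd inside the node whether the
	// kubelet unit is active. A non-zero exit means "not running" here.
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-529655",
		"sudo systemctl is-active --quiet service kubelet").Run()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee):
		fmt.Printf("kubelet not active (exit %d), as expected\n", ee.ExitCode())
	case err == nil:
		fmt.Println("kubelet unexpectedly active")
	default:
		fmt.Println("could not run the probe:", err)
	}
}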

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-529655
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-529655: (1.201108772s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-529655 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-529655 --driver=docker  --container-runtime=crio: (7.568223737s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-529655 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-529655 "sudo systemctl is-active --quiet service kubelet": exit status 1 (253.169888ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

TestStoppedBinaryUpgrade/Setup (8.5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.50s)

TestStoppedBinaryUpgrade/Upgrade (62.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1039418750 start -p stopped-upgrade-205401 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1001 19:25:29.267315  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1039418750 start -p stopped-upgrade-205401 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.168458462s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1039418750 -p stopped-upgrade-205401 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1039418750 -p stopped-upgrade-205401 stop: (1.232299751s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-205401 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-205401 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.299059117s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.70s)
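
The upgrade scenario above drives a previously released binary, stops the cluster, then restarts the same profile with the binary under test. A hedged sketch of the flow (the versioned path is the test's own temp copy of the old release):

    # 1. Create a cluster with the old release, then stop it.
    /tmp/minikube-v1.32.0.1039418750 start -p stopped-upgrade-205401 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.1039418750 -p stopped-upgrade-205401 stop
    # 2. Restart the stopped profile with the new binary; it must come up cleanly.
    minikube start -p stopped-upgrade-205401 --driver=docker --container-runtime=crio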

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-205401
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-205401: (1.149560808s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestPause/serial/Start (82.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-584235 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1001 19:28:24.270860  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-584235 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.517627494s)
--- PASS: TestPause/serial/Start (82.52s)

TestPause/serial/SecondStartNoReconfiguration (27.33s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-584235 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-584235 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.304733651s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.33s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-584235 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-584235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-584235 --output=json --layout=cluster: exit status 2 (399.653865ms)

-- stdout --
	{"Name":"pause-584235","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-584235","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
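
The status codes in the JSON above follow HTTP conventions (200 = OK, 405 = Stopped, 418 = Paused), and `minikube status` exits non-zero whenever components are not all running, which is why exit status 2 still counts as a pass here. A hedged sketch of pulling individual fields out of that layout (the jq usage is an illustration, not part of the test):

    minikube status -p pause-584235 --output=json --layout=cluster > status.json || true
    jq -r '.StatusName' status.json                                  # Paused
    jq -r '.Nodes[0].Components.kubelet.StatusName' status.json      # Stopped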

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-584235 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-584235 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-584235 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-584235 --alsologtostderr -v=5: (2.65263779s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

TestPause/serial/VerifyDeletedResources (0.38s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-584235
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-584235: exit status 1 (20.18637ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-584235: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
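
The deletion check above leans on `docker volume inspect` printing an empty array and exiting 1 when the volume is gone, so the non-zero exit is the desired outcome. A minimal sketch of the same verification:

    # After `minikube delete`, the profile's volume must no longer exist.
    if docker volume inspect pause-584235 >/dev/null 2>&1; then
      echo "volume still present (cleanup failed)"
    else
      echo "volume removed"   # expected
    fi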

TestNetworkPlugins/group/false (3.73s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-096987 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-096987 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (181.229621ms)

-- stdout --
	* [false-096987] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21631
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1001 19:29:52.497331  468728 out.go:360] Setting OutFile to fd 1 ...
	I1001 19:29:52.497458  468728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:29:52.497494  468728 out.go:374] Setting ErrFile to fd 2...
	I1001 19:29:52.497506  468728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1001 19:29:52.497853  468728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21631-288146/.minikube/bin
	I1001 19:29:52.498270  468728 out.go:368] Setting JSON to false
	I1001 19:29:52.499241  468728 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7945,"bootTime":1759339048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1001 19:29:52.499308  468728 start.go:140] virtualization:  
	I1001 19:29:52.502847  468728 out.go:179] * [false-096987] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1001 19:29:52.506628  468728 out.go:179]   - MINIKUBE_LOCATION=21631
	I1001 19:29:52.506754  468728 notify.go:220] Checking for updates...
	I1001 19:29:52.512446  468728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:29:52.515449  468728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21631-288146/kubeconfig
	I1001 19:29:52.518370  468728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21631-288146/.minikube
	I1001 19:29:52.521256  468728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 19:29:52.524185  468728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:29:52.527597  468728 config.go:182] Loaded profile config "kubernetes-upgrade-537001": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1001 19:29:52.527749  468728 driver.go:421] Setting default libvirt URI to qemu:///system
	I1001 19:29:52.554192  468728 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1001 19:29:52.554321  468728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:29:52.612798  468728 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-01 19:29:52.603347957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1001 19:29:52.612911  468728 docker.go:318] overlay module found
	I1001 19:29:52.615976  468728 out.go:179] * Using the docker driver based on user configuration
	I1001 19:29:52.618840  468728 start.go:304] selected driver: docker
	I1001 19:29:52.618857  468728 start.go:921] validating driver "docker" against <nil>
	I1001 19:29:52.618870  468728 start.go:932] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:29:52.622511  468728 out.go:203] 
	W1001 19:29:52.626024  468728 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1001 19:29:52.628871  468728 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-096987 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-096987

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-096987

>>> host: /etc/nsswitch.conf:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/hosts:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/resolv.conf:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-096987

>>> host: crictl pods:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: crictl containers:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> k8s: describe netcat deployment:
error: context "false-096987" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-096987" does not exist

>>> k8s: netcat logs:
error: context "false-096987" does not exist

>>> k8s: describe coredns deployment:
error: context "false-096987" does not exist

>>> k8s: describe coredns pods:
error: context "false-096987" does not exist

>>> k8s: coredns logs:
error: context "false-096987" does not exist

>>> k8s: describe api server pod(s):
error: context "false-096987" does not exist

>>> k8s: api server logs:
error: context "false-096987" does not exist

>>> host: /etc/cni:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: ip a s:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: ip r s:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: iptables-save:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: iptables table nat:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> k8s: describe kube-proxy daemon set:
error: context "false-096987" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-096987" does not exist

>>> k8s: kube-proxy logs:
error: context "false-096987" does not exist

>>> host: kubelet daemon status:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: kubelet daemon config:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> k8s: kubelet logs:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 19:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-537001
contexts:
- context:
    cluster: kubernetes-upgrade-537001
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 19:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-537001
  name: kubernetes-upgrade-537001
current-context: kubernetes-upgrade-537001
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-537001
  user:
    client-certificate: /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/kubernetes-upgrade-537001/client.crt
    client-key: /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/kubernetes-upgrade-537001/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-096987

>>> host: docker daemon status:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: docker daemon config:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/docker/daemon.json:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: docker system info:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: cri-docker daemon status:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: cri-docker daemon config:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: cri-dockerd version:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: containerd daemon status:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: containerd daemon config:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/containerd/config.toml:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: containerd config dump:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: crio daemon status:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: crio daemon config:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: /etc/crio:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

>>> host: crio config:
* Profile "false-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-096987"

----------------------- debugLogs end: false-096987 [took: 3.357757754s] --------------------------------
helpers_test.go:175: Cleaning up "false-096987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-096987
--- PASS: TestNetworkPlugins/group/false (3.73s)
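
This group passes precisely because the start is expected to fail: CRI-O ships no built-in networking, so minikube rejects --cni=false with usage error MK_USAGE (exit status 14) before creating anything. A hedged sketch of the contrast (the profile name and the alternative CNI value are illustrative):

    minikube start -p demo --container-runtime=crio --cni=false    # rejected, exit 14
    minikube start -p demo --container-runtime=crio --cni=bridge   # a valid pairing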

TestStartStop/group/old-k8s-version/serial/FirstStart (60.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-875226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1001 19:31:27.334807  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-875226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.251600025s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.25s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-875226 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8c57ff70-8ae8-4084-b601-03a01fd107b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8c57ff70-8ae8-4084-b601-03a01fd107b4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003450604s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-875226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.43s)
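
The DeployApp steps follow a standard create/wait/exec pattern. A hedged sketch of the same flow with plain kubectl (using `kubectl wait` where the test polls by label itself):

    kubectl --context old-k8s-version-875226 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-875226 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-875226 exec busybox -- /bin/sh -c "ulimit -n"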

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-875226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-875226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106808281s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-875226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-875226 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-875226 --alsologtostderr -v=3: (11.9357606s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-875226 -n old-k8s-version-875226
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-875226 -n old-k8s-version-875226: exit status 7 (75.139169ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-875226 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (55.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-875226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1001 19:33:24.270261  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-875226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.580813949s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-875226 -n old-k8s-version-875226
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.98s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qr8ds" [578be728-d268-4866-87ac-8d11740bec9f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003332655s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qr8ds" [578be728-d268-4866-87ac-8d11740bec9f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00408989s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-875226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-875226 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
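
The image audit simply lists what the runtime has pulled; images outside the expected minikube set (the kindnetd and busybox entries above) are reported as non-minikube without failing the test. The underlying command, shown with the generic binary name:

    minikube -p old-k8s-version-875226 image list --format=json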

TestStartStop/group/old-k8s-version/serial/Pause (3.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-875226 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-875226 -n old-k8s-version-875226
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-875226 -n old-k8s-version-875226: exit status 2 (314.577327ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-875226 -n old-k8s-version-875226
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-875226 -n old-k8s-version-875226: exit status 2 (317.429732ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-875226 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-875226 -n old-k8s-version-875226
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-875226 -n old-k8s-version-875226
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.40s)
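
The pause round-trip above verifies each state through Go-template status fields; a paused profile reports Paused/Stopped and makes `minikube status` exit 2, which the test logs as "may be ok". A condensed sketch of the cycle:

    minikube pause -p old-k8s-version-875226
    minikube status -p old-k8s-version-875226 --format='{{.APIServer}}'   # Paused
    minikube status -p old-k8s-version-875226 --format='{{.Kubelet}}'     # Stopped
    minikube unpause -p old-k8s-version-875226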

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-821041 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-821041 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.173089592s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.17s)

TestStartStop/group/embed-certs/serial/FirstStart (80.67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-303125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-303125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.66734223s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.67s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-821041 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c311a626-c930-4a47-a138-3007522d93dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1001 19:35:29.267328  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [c311a626-c930-4a47-a138-3007522d93dc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003738017s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-821041 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-821041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-821041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006901017s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-821041 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-821041 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-821041 --alsologtostderr -v=3: (11.953146144s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041: exit status 7 (74.840547ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-821041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-821041 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-821041 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.696581402s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.16s)

TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-303125 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f9056ea4-a91c-4be6-a92d-7428412d518e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f9056ea4-a91c-4be6-a92d-7428412d518e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003661818s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-303125 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-303125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-303125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.317777907s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-303125 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/embed-certs/serial/Stop (12.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-303125 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-303125 --alsologtostderr -v=3: (12.623778367s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.62s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-303125 -n embed-certs-303125
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-303125 -n embed-certs-303125: exit status 7 (71.114138ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-303125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
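
The EnableAddonAfterStop blocks in this run all exercise one pattern: query the host with status --format={{.Host}}, accept exit status 7 as a valid "Stopped" answer rather than a failure, then enable an addon against the stopped profile. Below is a minimal Go sketch of that pattern using only os/exec; the binary path, profile name, and exit-7 convention are taken from the log above, while the helper name is invented for illustration and is not the suite's real helper.

package main

import (
	"fmt"
	"os/exec"
)

// hostStatus mirrors the check above: exit status 7 from
// `minikube status` means the host is stopped, which this flow
// treats as "may be ok" rather than an error.
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		return string(out), nil
	}
	return string(out), err
}

func main() {
	status, err := hostStatus("embed-certs-303125")
	if err != nil {
		panic(err)
	}
	fmt.Printf("host: %s", status) // "Stopped"
	// Addons can still be toggled while the cluster is down:
	if err := exec.Command("out/minikube-linux-arm64", "addons", "enable",
		"dashboard", "-p", "embed-certs-303125").Run(); err != nil {
		fmt.Println("enable dashboard:", err)
	}
}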

TestStartStop/group/embed-certs/serial/SecondStart (55.82s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-303125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-303125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.41290602s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-303125 -n embed-certs-303125
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.82s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8gfk6" [2363ac9e-a5b2-4a8a-a337-eb33c8a24597] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003125755s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8gfk6" [2363ac9e-a5b2-4a8a-a337-eb33c8a24597] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003567629s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-821041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-821041 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-821041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041: exit status 2 (341.371181ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041: exit status 2 (322.600111ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-821041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-821041 -n default-k8s-diff-port-821041
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

TestStartStop/group/no-preload/serial/FirstStart (70.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-544423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-544423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.39931092s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.40s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-52z8q" [00609167-3c5e-48eb-813c-eae4b45ffce9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005417588s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-52z8q" [00609167-3c5e-48eb-813c-eae4b45ffce9] Running
E1001 19:37:24.757228  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:24.763549  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:24.774898  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:24.796250  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:24.837772  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:24.919297  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:25.080646  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:25.402630  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:26.044005  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:37:27.325476  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004619709s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-303125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-303125 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-303125 --alsologtostderr -v=1
E1001 19:37:29.887622  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-303125 --alsologtostderr -v=1: (1.143077702s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-303125 -n embed-certs-303125
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-303125 -n embed-certs-303125: exit status 2 (430.69459ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-303125 -n embed-certs-303125
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-303125 -n embed-certs-303125: exit status 2 (422.959347ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-303125 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-303125 -n embed-certs-303125
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-303125 -n embed-certs-303125
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.02s)
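
The Pause blocks each drive the same cycle: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (both via exit status 2, which the test tolerates as "may be ok"), then unpause. A rough Go sketch of that cycle, again with os/exec and with paths and the profile name copied from the log rather than from the real test code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField reads one field of `minikube status`. While a profile is
// paused the command exits with status 2, which is tolerated here just
// as the "status error: exit status 2 (may be ok)" lines above do.
func statusField(profile, field string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 2 {
		err = nil
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	const profile = "embed-certs-303125"
	mk := func(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", args...).Run()
	}
	if err := mk("pause", "-p", profile); err != nil {
		panic(err)
	}
	api, _ := statusField(profile, "APIServer")   // expect "Paused"
	kubelet, _ := statusField(profile, "Kubelet") // expect "Stopped"
	fmt.Println("apiserver:", api, "| kubelet:", kubelet)
	if err := mk("unpause", "-p", profile); err != nil {
		panic(err)
	}
}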

TestStartStop/group/newest-cni/serial/FirstStart (42.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-748939 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1001 19:37:45.251587  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:38:05.733919  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-748939 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.197690768s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.20s)

TestStartStop/group/no-preload/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-544423 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e3b04bdb-3e4d-4558-b984-bf6d6cd64139] Pending
helpers_test.go:352: "busybox" [e3b04bdb-3e4d-4558-b984-bf6d6cd64139] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e3b04bdb-3e4d-4558-b984-bf6d6cd64139] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004262214s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-544423 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)
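
DeployApp ends by exec'ing "ulimit -n" inside the busybox pod, confirming the container is reachable and its file-descriptor limit readable once the pod is Running. A small Go wrapper around that exact kubectl invocation (illustrative only; the context and pod names come from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ulimitInPod reproduces the final DeployApp check: run `ulimit -n`
// inside the busybox pod via kubectl exec.
func ulimitInPod(kubeContext, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", pod, "--", "/bin/sh", "-c", "ulimit -n").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	limit, err := ulimitInPod("no-preload-544423", "busybox")
	if err != nil {
		panic(err)
	}
	fmt.Println("open-file limit in pod:", limit)
}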

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-748939 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/newest-cni/serial/Stop (1.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-748939 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-748939 --alsologtostderr -v=3: (1.388204609s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-748939 -n newest-cni-748939
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-748939 -n newest-cni-748939: exit status 7 (70.204408ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-748939 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (15.17s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-748939 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-748939 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.630697848s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-748939 -n newest-cni-748939
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.17s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-544423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-544423 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-544423 --alsologtostderr -v=3
E1001 19:38:24.270758  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/functional-246462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-544423 --alsologtostderr -v=3: (12.129034147s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-748939 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-544423 -n no-preload-544423
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-544423 -n no-preload-544423: exit status 7 (124.763646ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-544423 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/newest-cni/serial/Pause (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-748939 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-748939 --alsologtostderr -v=1: (1.11261147s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-748939 -n newest-cni-748939
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-748939 -n newest-cni-748939: exit status 2 (429.154921ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-748939 -n newest-cni-748939
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-748939 -n newest-cni-748939: exit status 2 (393.862858ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-748939 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-748939 -n newest-cni-748939
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-748939 -n newest-cni-748939
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.37s)

TestStartStop/group/no-preload/serial/SecondStart (55.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-544423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-544423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.65813838s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-544423 -n no-preload-544423
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.01s)

TestNetworkPlugins/group/auto/Start (86.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1001 19:38:46.695997  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.464753339s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.46s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qrpj7" [75ed73df-2c08-45d9-9c74-749d85f19103] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003422726s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qrpj7" [75ed73df-2c08-45d9-9c74-749d85f19103] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004239511s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-544423 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-544423 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
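
VerifyKubernetesImages lists the images loaded in the profile as JSON and reports anything outside the expected set for the Kubernetes version (here kindnetd and the busybox test image). A sketch of the listing half of that check; the repoTags field name is an assumption about minikube's JSON output, so the document is decoded generically:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p",
		"no-preload-544423", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically: this sketch does not pin down minikube's
	// exact JSON schema, only that each entry may carry repo tags.
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if tags, ok := img["repoTags"].([]any); ok {
			for _, tag := range tags {
				fmt.Println("image:", tag)
			}
		}
	}
	// The real test diffs this list against the images expected for the
	// Kubernetes version and logs the leftovers as "non-minikube" images.
}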

TestStartStop/group/no-preload/serial/Pause (3.18s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-544423 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-544423 -n no-preload-544423
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-544423 -n no-preload-544423: exit status 2 (335.932981ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-544423 -n no-preload-544423
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-544423 -n no-preload-544423: exit status 2 (322.139064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-544423 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-544423 -n no-preload-544423
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-544423 -n no-preload-544423
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)
E1001 19:45:19.656805  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/auto-096987/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:45:28.873647  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/default-k8s-diff-port-821041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:45:29.267340  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/addons-157757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:45:29.898466  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/auto-096987/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (83.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1001 19:40:08.617482  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.157352382s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.16s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-096987 "pgrep -a kubelet"
I1001 19:40:09.036588  290016 config.go:182] Loaded profile config "auto-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-096987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jlt5m" [db16fbc6-ee2c-4041-a234-c0e9e8544034] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jlt5m" [db16fbc6-ee2c-4041-a234-c0e9e8544034] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003687077s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.40s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
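
The DNS, Localhost, and HairPin checks above all reduce to a single kubectl exec against the netcat deployment; HairPin is the interesting one, since the pod dials its own Service name and only passes if the CNI routes the traffic back to the originating pod. A compact Go sketch of the three probes, with the shell commands copied verbatim from the log (an illustration, not net_test.go itself):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one connectivity check inside the netcat deployment,
// the way the DNS, Localhost, and HairPin steps above do.
func probe(kubeContext, command string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", command).Run()
}

func main() {
	const kubeContext = "auto-096987"
	checks := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
		// HairPin: the pod connects to its own Service ("netcat").
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, c := range checks {
		if err := probe(kubeContext, c.cmd); err != nil {
			fmt.Println(c.name, "failed:", err)
		} else {
			fmt.Println(c.name, "ok")
		}
	}
}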

TestNetworkPlugins/group/calico/Start (60.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1001 19:40:49.377718  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/default-k8s-diff-port-821041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1001 19:41:09.860043  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/default-k8s-diff-port-821041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.699889429s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.70s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h5qsm" [ff36c7ff-0088-4c8a-b9b5-d2acca5cdc28] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003774764s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
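
ControllerPod waits up to 10m for a pod matching the CNI's label (app=kindnet here, k8s-app=calico-node below) to become healthy. The suite does this through its own helpers; the sketch below shows the same polling idea with client-go directly, simplified to the Running phase (the real check also considers readiness):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until a pod matching the selector is Running,
// roughly the `waiting 10m0s for pods matching "app=kindnet"` step above.
func waitForLabeledPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("%q is Running\n", p.Name)
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s in %s", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPod(cs, "kube-system", "app=kindnet", 10*time.Minute); err != nil {
		panic(err)
	}
}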

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-096987 "pgrep -a kubelet"
I1001 19:41:18.635806  290016 config.go:182] Loaded profile config "kindnet-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-096987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7v4gb" [7bd6864f-c3ea-4af3-b7cb-7a17bd7c7008] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7v4gb" [7bd6864f-c3ea-4af3-b7cb-7a17bd7c7008] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003842886s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

TestNetworkPlugins/group/kindnet/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.33s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hqwq7" [b7106d5b-3326-46a0-b080-a37ccb44f702] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-hqwq7" [b7106d5b-3326-46a0-b080-a37ccb44f702] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004172526s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-096987 "pgrep -a kubelet"
I1001 19:41:49.541620  290016 config.go:182] Loaded profile config "calico-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-096987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f4l87" [aae5ff61-9fb5-4edf-a0ca-a1ff73485440] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 19:41:50.822301  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/default-k8s-diff-port-821041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-f4l87" [aae5ff61-9fb5-4edf-a0ca-a1ff73485440] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004531624s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.40s)

TestNetworkPlugins/group/custom-flannel/Start (61.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.858922991s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.86s)

TestNetworkPlugins/group/calico/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (80.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1001 19:42:52.459517  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/old-k8s-version-875226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.086771127s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.09s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-096987 "pgrep -a kubelet"
I1001 19:42:58.380866  290016 config.go:182] Loaded profile config "custom-flannel-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-096987 replace --force -f testdata/netcat-deployment.yaml
I1001 19:42:58.807418  290016 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-665sr" [8070779b-d23d-4989-81ad-03067bf79b1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-665sr" [8070779b-d23d-4989-81ad-03067bf79b1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004390465s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1001 19:43:33.378355  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/no-preload-544423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.096809757s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.10s)
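Each Start step shells out to minikube with --wait=true and --wait-timeout=15m. A minimal sketch of the same invocation with a hard process deadline; the flags mirror the command in the log, and the 20m ceiling is an illustrative assumption.

// start_cluster.go: minimal sketch, not part of the harness.
package main

import (
	"context"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Hard ceiling slightly above --wait-timeout so a hung start is killed.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
		"-p", "flannel-096987", "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m", "--cni=flannel",
		"--driver=docker", "--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err) // CommandContext kills the process once the deadline passes
	}
}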

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-096987 "pgrep -a kubelet"
I1001 19:43:49.656766  290016 config.go:182] Loaded profile config "enable-default-cni-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-096987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-76mf9" [cb6e4e5f-28b8-46cc-8f75-1246cc66cc9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1001 19:43:53.860057  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/no-preload-544423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-76mf9" [cb6e4e5f-28b8-46cc-8f75-1246cc66cc9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003832362s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (70.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1001 19:44:34.822347  290016 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/no-preload-544423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-096987 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.409106759s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-bqb99" [36e205d8-1c32-49ac-8684-70a5a795c6cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004446461s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
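The ControllerPod step waits for app=flannel pods in the kube-flannel namespace. An equivalent check can compare the DaemonSet's status counters instead of listing pods; a minimal sketch follows, where the DaemonSet name kube-flannel-ds is inferred from the pod name in the log and is an assumption.

// flannel_ds_check.go: minimal sketch, not part of the harness.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Assumed DaemonSet name, derived from pod "kube-flannel-ds-bqb99" above.
	ds, err := cs.AppsV1().DaemonSets("kube-flannel").Get(
		context.Background(), "kube-flannel-ds", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Healthy when every scheduled node runs a ready flannel pod.
	fmt.Printf("kube-flannel-ds ready: %d/%d\n",
		ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
}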

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-096987 "pgrep -a kubelet"
I1001 19:44:41.500504  290016 config.go:182] Loaded profile config "flannel-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-096987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2pmc7" [29b6f3b9-1b0f-4cda-92b2-4c22cce1f586] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2pmc7" [29b6f3b9-1b0f-4cda-92b2-4c22cce1f586] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003166942s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-096987 "pgrep -a kubelet"
I1001 19:45:35.749265  290016 config.go:182] Loaded profile config "bridge-096987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-096987 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-djzvh" [36b0756d-a0bf-4b2b-b2f4-f0f8e8d59ce3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-djzvh" [36b0756d-a0bf-4b2b-b2f4-f0f8e8d59ce3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004623915s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-096987 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-096987 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (32/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-539864 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-539864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-539864
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.35s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-157757 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-334550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-334550
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-096987 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-096987

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-096987

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/hosts:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/resolv.conf:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-096987

>>> host: crictl pods:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: crictl containers:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> k8s: describe netcat deployment:
error: context "kubenet-096987" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-096987" does not exist

>>> k8s: netcat logs:
error: context "kubenet-096987" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-096987" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-096987" does not exist

>>> k8s: coredns logs:
error: context "kubenet-096987" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-096987" does not exist

>>> k8s: api server logs:
error: context "kubenet-096987" does not exist

>>> host: /etc/cni:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: ip a s:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: ip r s:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: iptables-save:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: iptables table nat:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-096987" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-096987" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-096987" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: kubelet daemon config:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> k8s: kubelet logs:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 19:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-537001
contexts:
- context:
    cluster: kubernetes-upgrade-537001
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 19:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-537001
  name: kubernetes-upgrade-537001
current-context: kubernetes-upgrade-537001
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-537001
  user:
    client-certificate: /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/kubernetes-upgrade-537001/client.crt
    client-key: /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/kubernetes-upgrade-537001/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-096987

>>> host: docker daemon status:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: docker daemon config:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: docker system info:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: cri-docker daemon status:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: cri-docker daemon config:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: cri-dockerd version:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: containerd daemon status:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: containerd daemon config:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: containerd config dump:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: crio daemon status:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: crio daemon config:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: /etc/crio:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

>>> host: crio config:
* Profile "kubenet-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-096987"

----------------------- debugLogs end: kubenet-096987 [took: 4.317246676s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-096987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-096987
--- SKIP: TestNetworkPlugins/group/kubenet (4.47s)
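The debugLogs dump above shows why every probe failed: the kubeconfig's current-context still points at a leftover kubernetes-upgrade-537001 profile, so no kubenet-096987 context exists. A minimal clientcmd sketch of checking for a context before running such probes, assuming the kubeconfig at its default location.

// context_check.go: minimal sketch, not part of the harness.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	if _, ok := cfg.Contexts["kubenet-096987"]; !ok {
		fmt.Println(`context "kubenet-096987" does not exist; skip kubectl probes`)
	}
}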

                                                
                                    
TestNetworkPlugins/group/cilium (4.46s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-096987 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-096987

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-096987

>>> host: /etc/nsswitch.conf:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: /etc/hosts:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: /etc/resolv.conf:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-096987

>>> host: crictl pods:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: crictl containers:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> k8s: describe netcat deployment:
error: context "cilium-096987" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-096987" does not exist

>>> k8s: netcat logs:
error: context "cilium-096987" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-096987" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-096987" does not exist

>>> k8s: coredns logs:
error: context "cilium-096987" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-096987" does not exist

>>> k8s: api server logs:
error: context "cilium-096987" does not exist

>>> host: /etc/cni:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: ip a s:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: ip r s:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: iptables-save:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> host: iptables table nat:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-096987

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-096987

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-096987" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-096987" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-096987

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-096987

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-096987" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-096987" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-096987" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-096987" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-096987" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21631-288146/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 19:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-537001
contexts:
- context:
    cluster: kubernetes-upgrade-537001
    extensions:
    - extension:
        last-update: Wed, 01 Oct 2025 19:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-537001
  name: kubernetes-upgrade-537001
current-context: kubernetes-upgrade-537001
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-537001
  user:
    client-certificate: /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/kubernetes-upgrade-537001/client.crt
    client-key: /home/jenkins/minikube-integration/21631-288146/.minikube/profiles/kubernetes-upgrade-537001/client.key
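
The kubeconfig above explains every context error in this dump: the only cluster, context, and user defined belong to kubernetes-upgrade-537001, while the debugLogs collector queries the cilium-096987 context, which was never created. A minimal sketch for confirming this locally, assuming standard kubectl and minikube binaries on PATH (the profile name is taken from the log above):

    # cilium-096987 should be absent from the known contexts
    kubectl config get-contexts

    # and likewise absent from the minikube profile list
    minikube profile list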

                                                
                                                

                                                
                                                
>>> k8s: cms (configmaps):
Error in configuration: context was not found for specified context: cilium-096987

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-096987" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096987"

                                                
                                                
----------------------- debugLogs end: cilium-096987 [took: 4.174151565s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-096987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-096987
--- SKIP: TestNetworkPlugins/group/cilium (4.46s)
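
For anyone reproducing this locally, a hedged sketch of re-running only this group from a minikube source checkout. The go test -run filter is standard Go tooling; the package path and the prebuilt out/minikube-linux-arm64 binary are assumptions inferred from the commands in this report, not verified against the harness flags:

    # rerun just the cilium network-plugin group (path and filter are assumptions)
    go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -v

    # clean up any leftover profile the same way the harness does above
    out/minikube-linux-arm64 delete -p cilium-096987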

                                                
                                    