Test Report: Docker_Linux_crio_arm64 21657

666c3351e3298333ddd2e3f0587bd3e8ac91c0cd:2025-09-29:41679

Test failures (7/326)

TestAddons/parallel/Ingress (156.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-718460 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-718460 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-718460 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a145b2eb-460e-415e-9720-a44b7a6ed478] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a145b2eb-460e-415e-9720-a44b7a6ed478] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00411818s
I0929 10:24:37.841472    4108 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718460 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.310947488s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
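The ssh failure above propagates the remote command's exit status: 28 is curl's exit code for an operation timeout (CURLE_OPERATION_TIMEDOUT), so the request reached the node but nothing behind the ingress answered in time. A minimal manual re-check, as a sketch assuming the addons-718460 profile is still running and the ingress addon's usual controller deployment name:

    # repeat the probe verbosely with an explicit client-side timeout
    out/minikube-linux-arm64 -p addons-718460 ssh "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # confirm the Ingress object exists and see whether the controller admitted it
    kubectl --context addons-718460 get ingress -A
    kubectl --context addons-718460 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50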
addons_test.go:288: (dbg) Run:  kubectl --context addons-718460 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
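The ingress-dns step resolves the example hostname directly against the node IP reported by "minikube ip", which the addon exposes as a DNS server. The equivalent manual checks, assuming the example manifest is still applied (dig is an optional alternative):

    nslookup hello-john.test 192.168.49.2
    dig +short hello-john.test @192.168.49.2   # same query via dig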
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-718460
helpers_test.go:243: (dbg) docker inspect addons-718460:

-- stdout --
	[
	    {
	        "Id": "2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084",
	        "Created": "2025-09-29T10:20:35.445279562Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5264,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:20:35.50805324Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084/hostname",
	        "HostsPath": "/var/lib/docker/containers/2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084/hosts",
	        "LogPath": "/var/lib/docker/containers/2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084/2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084-json.log",
	        "Name": "/addons-718460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-718460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-718460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084",
	                "LowerDir": "/var/lib/docker/overlay2/9b64349f6c5dca81ccedbb3530ac4c968770204da7147ac505fd413d28f271fe-init/diff:/var/lib/docker/overlay2/03dcb74e0e5b38ad12cb364793e3e5cf6f66af30c67c32b56aeac11291ac3658/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b64349f6c5dca81ccedbb3530ac4c968770204da7147ac505fd413d28f271fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b64349f6c5dca81ccedbb3530ac4c968770204da7147ac505fd413d28f271fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b64349f6c5dca81ccedbb3530ac4c968770204da7147ac505fd413d28f271fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-718460",
	                "Source": "/var/lib/docker/volumes/addons-718460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-718460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-718460",
	                "name.minikube.sigs.k8s.io": "addons-718460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d42004edf48928e519397eb0efcd684145f7493fa870945612515e439029a275",
	            "SandboxKey": "/var/run/docker/netns/d42004edf489",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-718460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:22:e8:c8:2f:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b48ded1c9b9da630f0fb76b1fd1b61375db268f6791336b9294e3a7d72109b0a",
	                    "EndpointID": "9712dd313c6cf09d3489a1b681a1373204a13f6e70a175a86c3928ff3f4bb48a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-718460",
	                        "2613950f87c2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
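The full JSON dump above is captured for the post-mortem; for targeted checks the same data is cheaper to pull with a Go template, which is exactly how minikube itself queries it later in this log:

    # container IP on the cluster network (expected here: 192.168.49.2)
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-718460
    # host port published for the guest's 22/tcp (expected here: 32768)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-718460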
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-718460 -n addons-718460
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-718460 logs -n 25: (1.646578617s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-283576                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-283576 │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │ 29 Sep 25 10:20 UTC │
	│ start   │ --download-only -p binary-mirror-445148 --alsologtostderr --binary-mirror http://127.0.0.1:40861 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-445148   │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │                     │
	│ delete  │ -p binary-mirror-445148                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-445148   │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │ 29 Sep 25 10:20 UTC │
	│ addons  │ enable dashboard -p addons-718460                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-718460                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │                     │
	│ start   │ -p addons-718460 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ ip      │ addons-718460 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ enable headlamp -p addons-718460 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ ssh     │ addons-718460 ssh cat /opt/local-path-provisioner/pvc-1afac192-e0ae-4f4a-af4b-19ffc8f3bcd9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-718460 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-718460 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-718460 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-718460 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-718460 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-718460                                                                                                                                                                                                                                                                                                                                                                                           │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-718460 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ ssh     │ addons-718460 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │                     │
	│ ip      │ addons-718460 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-718460          │ jenkins │ v1.37.0 │ 29 Sep 25 10:26 UTC │ 29 Sep 25 10:26 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
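Note that the failing "ssh ... curl" row at 10:24 has no END TIME, matching the 2m11s hang above. The audit trail can also be pulled on its own, as a sketch assuming a minikube build that supports the --audit flag of "minikube logs":

    out/minikube-linux-arm64 -p addons-718460 logs --audit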
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:20:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:20:10.327596    4864 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:20:10.327755    4864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:20:10.327772    4864 out.go:374] Setting ErrFile to fd 2...
	I0929 10:20:10.327778    4864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:20:10.328104    4864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:20:10.328724    4864 out.go:368] Setting JSON to false
	I0929 10:20:10.329588    4864 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":160,"bootTime":1759141051,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 10:20:10.329661    4864 start.go:140] virtualization:  
	I0929 10:20:10.332974    4864 out.go:179] * [addons-718460] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 10:20:10.336810    4864 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:20:10.336959    4864 notify.go:220] Checking for updates...
	I0929 10:20:10.342810    4864 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:20:10.345761    4864 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:20:10.348600    4864 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 10:20:10.351419    4864 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 10:20:10.354212    4864 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:20:10.357344    4864 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:20:10.387418    4864 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 10:20:10.387522    4864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:20:10.444817    4864 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-29 10:20:10.436016287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:20:10.444926    4864 docker.go:318] overlay module found
	I0929 10:20:10.448059    4864 out.go:179] * Using the docker driver based on user configuration
	I0929 10:20:10.451045    4864 start.go:304] selected driver: docker
	I0929 10:20:10.451081    4864 start.go:924] validating driver "docker" against <nil>
	I0929 10:20:10.451106    4864 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:20:10.452255    4864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:20:10.506300    4864 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-29 10:20:10.496754606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:20:10.506479    4864 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:20:10.506735    4864 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:20:10.509805    4864 out.go:179] * Using Docker driver with root privileges
	I0929 10:20:10.512685    4864 cni.go:84] Creating CNI manager for ""
	I0929 10:20:10.512756    4864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:20:10.512771    4864 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:20:10.512851    4864 start.go:348] cluster config:
	{Name:addons-718460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-718460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 10:20:10.515900    4864 out.go:179] * Starting "addons-718460" primary control-plane node in "addons-718460" cluster
	I0929 10:20:10.518701    4864 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:20:10.521612    4864 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:20:10.524482    4864 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:10.524537    4864 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 10:20:10.524555    4864 cache.go:58] Caching tarball of preloaded images
	I0929 10:20:10.524570    4864 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:20:10.524646    4864 preload.go:172] Found /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 10:20:10.524656    4864 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:20:10.524979    4864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/config.json ...
	I0929 10:20:10.525008    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/config.json: {Name:mk244bc95819beead629576759add80650a31a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:10.539343    4864 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:20:10.539466    4864 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:20:10.539484    4864 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:20:10.539489    4864 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:20:10.539496    4864 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:20:10.539502    4864 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:20:28.101965    4864 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:20:28.102004    4864 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:20:28.102053    4864 start.go:360] acquireMachinesLock for addons-718460: {Name:mk5cf0842e1bbc87e8986932a73170c10de0e0b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:20:28.102166    4864 start.go:364] duration metric: took 90.188µs to acquireMachinesLock for "addons-718460"
	I0929 10:20:28.102195    4864 start.go:93] Provisioning new machine with config: &{Name:addons-718460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-718460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:20:28.102261    4864 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:20:28.105710    4864 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:20:28.105961    4864 start.go:159] libmachine.API.Create for "addons-718460" (driver="docker")
	I0929 10:20:28.106000    4864 client.go:168] LocalClient.Create starting
	I0929 10:20:28.106120    4864 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem
	I0929 10:20:28.530089    4864 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem
	I0929 10:20:29.009467    4864 cli_runner.go:164] Run: docker network inspect addons-718460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:20:29.025468    4864 cli_runner.go:211] docker network inspect addons-718460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:20:29.025563    4864 network_create.go:284] running [docker network inspect addons-718460] to gather additional debugging logs...
	I0929 10:20:29.025589    4864 cli_runner.go:164] Run: docker network inspect addons-718460
	W0929 10:20:29.042351    4864 cli_runner.go:211] docker network inspect addons-718460 returned with exit code 1
	I0929 10:20:29.042384    4864 network_create.go:287] error running [docker network inspect addons-718460]: docker network inspect addons-718460: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-718460 not found
	I0929 10:20:29.042398    4864 network_create.go:289] output of [docker network inspect addons-718460]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-718460 not found
	
	** /stderr **
	I0929 10:20:29.042510    4864 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:20:29.059524    4864 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2ee00}
	I0929 10:20:29.059570    4864 network_create.go:124] attempt to create docker network addons-718460 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:20:29.059632    4864 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-718460 addons-718460
	I0929 10:20:29.115953    4864 network_create.go:108] docker network addons-718460 192.168.49.0/24 created
	I0929 10:20:29.115986    4864 kic.go:121] calculated static IP "192.168.49.2" for the "addons-718460" container
	I0929 10:20:29.116066    4864 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:20:29.130552    4864 cli_runner.go:164] Run: docker volume create addons-718460 --label name.minikube.sigs.k8s.io=addons-718460 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:20:29.149099    4864 oci.go:103] Successfully created a docker volume addons-718460
	I0929 10:20:29.149206    4864 cli_runner.go:164] Run: docker run --rm --name addons-718460-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718460 --entrypoint /usr/bin/test -v addons-718460:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:20:31.226196    4864 cli_runner.go:217] Completed: docker run --rm --name addons-718460-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718460 --entrypoint /usr/bin/test -v addons-718460:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.076937792s)
	I0929 10:20:31.226231    4864 oci.go:107] Successfully prepared a docker volume addons-718460
	I0929 10:20:31.226262    4864 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:31.226282    4864 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:20:31.226352    4864 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718460:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:20:35.375945    4864 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718460:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.149557132s)
	I0929 10:20:35.375975    4864 kic.go:203] duration metric: took 4.1496901s to extract preloaded images to volume ...
	W0929 10:20:35.376132    4864 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 10:20:35.376247    4864 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:20:35.430526    4864 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-718460 --name addons-718460 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718460 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-718460 --network addons-718460 --ip 192.168.49.2 --volume addons-718460:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:20:35.768194    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Running}}
	I0929 10:20:35.793725    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:20:35.817345    4864 cli_runner.go:164] Run: docker exec addons-718460 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:20:35.872493    4864 oci.go:144] the created container "addons-718460" has a running status.
	I0929 10:20:35.872519    4864 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa...
	I0929 10:20:36.317914    4864 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:20:36.343697    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:20:36.369745    4864 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:20:36.369765    4864 kic_runner.go:114] Args: [docker exec --privileged addons-718460 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:20:36.425011    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:20:36.452448    4864 machine.go:93] provisionDockerMachine start ...
	I0929 10:20:36.452544    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:36.479861    4864 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:36.480209    4864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:36.480219    4864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:20:36.650853    4864 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718460
	
	I0929 10:20:36.650879    4864 ubuntu.go:182] provisioning hostname "addons-718460"
	I0929 10:20:36.651011    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:36.670800    4864 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:36.671099    4864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:36.671111    4864 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718460 && echo "addons-718460" | sudo tee /etc/hostname
	I0929 10:20:36.838609    4864 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718460
	
	I0929 10:20:36.838768    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:36.857982    4864 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:36.858386    4864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:36.858406    4864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718460/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:20:37.008926    4864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:37.008954    4864 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21657-2306/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-2306/.minikube}
	I0929 10:20:37.008988    4864 ubuntu.go:190] setting up certificates
	I0929 10:20:37.008998    4864 provision.go:84] configureAuth start
	I0929 10:20:37.009072    4864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718460
	I0929 10:20:37.028635    4864 provision.go:143] copyHostCerts
	I0929 10:20:37.028727    4864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem (1082 bytes)
	I0929 10:20:37.028861    4864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem (1123 bytes)
	I0929 10:20:37.028928    4864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem (1679 bytes)
	I0929 10:20:37.028985    4864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem org=jenkins.addons-718460 san=[127.0.0.1 192.168.49.2 addons-718460 localhost minikube]
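configureAuth signs a server certificate with the SANs listed above; assuming OpenSSL 1.1.1+ for the -ext flag, the result can be spot-checked like this (expected entries taken from the san=[...] list; ordering may differ):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem
	# expect: DNS:addons-718460, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.2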
	I0929 10:20:37.741634    4864 provision.go:177] copyRemoteCerts
	I0929 10:20:37.741709    4864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:20:37.741775    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:37.758952    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:20:37.860380    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:20:37.883821    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 10:20:37.907769    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:20:37.930747    4864 provision.go:87] duration metric: took 921.722681ms to configureAuth
	I0929 10:20:37.930770    4864 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:20:37.930952    4864 config.go:182] Loaded profile config "addons-718460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:37.931062    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:37.948135    4864 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:37.948432    4864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:37.948455    4864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:20:38.199997    4864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 10:20:38.200026    4864 machine.go:96] duration metric: took 1.747559845s to provisionDockerMachine
	I0929 10:20:38.200036    4864 client.go:171] duration metric: took 10.094026917s to LocalClient.Create
	I0929 10:20:38.200050    4864 start.go:167] duration metric: took 10.094089864s to libmachine.API.Create "addons-718460"
	I0929 10:20:38.200057    4864 start.go:293] postStartSetup for "addons-718460" (driver="docker")
	I0929 10:20:38.200067    4864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:20:38.200133    4864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:20:38.200179    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:38.218378    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:20:38.316407    4864 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:20:38.319550    4864 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:20:38.319584    4864 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:20:38.319596    4864 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:20:38.319602    4864 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:20:38.319612    4864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-2306/.minikube/addons for local assets ...
	I0929 10:20:38.319676    4864 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-2306/.minikube/files for local assets ...
	I0929 10:20:38.319703    4864 start.go:296] duration metric: took 119.640248ms for postStartSetup
	I0929 10:20:38.320012    4864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718460
	I0929 10:20:38.337272    4864 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/config.json ...
	I0929 10:20:38.337560    4864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:20:38.337617    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:38.355027    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:20:38.452094    4864 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:20:38.456437    4864 start.go:128] duration metric: took 10.354160333s to createHost
	I0929 10:20:38.456457    4864 start.go:83] releasing machines lock for "addons-718460", held for 10.354278589s
	I0929 10:20:38.456537    4864 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718460
	I0929 10:20:38.474287    4864 ssh_runner.go:195] Run: cat /version.json
	I0929 10:20:38.474338    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:38.474591    4864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:20:38.474650    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:20:38.503392    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:20:38.504632    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:20:38.720024    4864 ssh_runner.go:195] Run: systemctl --version
	I0929 10:20:38.724255    4864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:20:38.864404    4864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:20:38.868544    4864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:38.889681    4864 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:20:38.889795    4864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:38.921495    4864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
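Both find/mv passes simply rename matching CNI configs so the runtime ignores them in favor of the CNI minikube applies later. The effect, given the two bridge files named above plus whichever loopback config matched (that file name is hypothetical, not captured in this log):

	ls /etc/cni/net.d
	# 100-crio-bridge.conf.mk_disabled
	# 87-podman-bridge.conflist.mk_disabled
	# 200-loopback.conf.mk_disabled    # hypothetical loopback file name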
	I0929 10:20:38.921519    4864 start.go:495] detecting cgroup driver to use...
	I0929 10:20:38.921551    4864 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 10:20:38.921606    4864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:20:38.937535    4864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:20:38.948831    4864 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:20:38.948947    4864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:20:38.962822    4864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:20:38.978230    4864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:20:39.062792    4864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:20:39.155423    4864 docker.go:234] disabling docker service ...
	I0929 10:20:39.155563    4864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:20:39.176553    4864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:20:39.189049    4864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:20:39.270876    4864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:20:39.373158    4864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:20:39.385643    4864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:20:39.402346    4864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:20:39.402452    4864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:39.412114    4864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 10:20:39.412179    4864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:39.421709    4864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:39.431474    4864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:39.441577    4864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:20:39.450752    4864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:39.460426    4864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:39.476434    4864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
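Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs manager, and open low ports to unprivileged binds. The intended end state of /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured in this log:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",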
	I0929 10:20:39.486412    4864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:20:39.494839    4864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 10:20:39.494915    4864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 10:20:39.509219    4864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
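The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the /proc path does not exist; the fallback above is equivalent to:

	sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
	sysctl net.bridge.bridge-nf-call-iptables         # should now resolve
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # needed for pod traffic routing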
	I0929 10:20:39.517743    4864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:39.594977    4864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 10:20:39.711678    4864 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:20:39.711829    4864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:20:39.715324    4864 start.go:563] Will wait 60s for crictl version
	I0929 10:20:39.715394    4864 ssh_runner.go:195] Run: which crictl
	I0929 10:20:39.718500    4864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:20:39.758228    4864 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 10:20:39.758365    4864 ssh_runner.go:195] Run: crio --version
	I0929 10:20:39.801824    4864 ssh_runner.go:195] Run: crio --version
	I0929 10:20:39.847835    4864 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 10:20:39.850713    4864 cli_runner.go:164] Run: docker network inspect addons-718460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:20:39.867174    4864 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:20:39.870683    4864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
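The one-liner above is an idempotent /etc/hosts update: filter out any stale mapping, append the fresh one, and copy the result back under sudo (a plain > redirect onto /etc/hosts would run as the unprivileged user and fail). Unpacked:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts   # drop an existing mapping, if any
	  echo $'192.168.49.1\thost.minikube.internal'      # re-add it with the current gateway IP
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts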
	I0929 10:20:39.881284    4864 kubeadm.go:875] updating cluster {Name:addons-718460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-718460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:20:39.881408    4864 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:39.881471    4864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:39.965097    4864 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:39.965119    4864 crio.go:433] Images already preloaded, skipping extraction
	I0929 10:20:39.965173    4864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:40.013372    4864 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:40.013402    4864 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:20:40.013412    4864 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0929 10:20:40.013512    4864 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-718460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-718460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:20:40.013620    4864 ssh_runner.go:195] Run: crio config
	I0929 10:20:40.083269    4864 cni.go:84] Creating CNI manager for ""
	I0929 10:20:40.083290    4864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:20:40.083300    4864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:20:40.083322    4864 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718460 NodeName:addons-718460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:20:40.083452    4864 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
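Before this file is handed to kubeadm init below, it can be sanity-checked offline; a hypothetical invocation ('kubeadm config validate' is available in kubeadm v1.26 and later):

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml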
	I0929 10:20:40.083523    4864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:20:40.093564    4864 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:20:40.093655    4864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:20:40.103748    4864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 10:20:40.123855    4864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:20:40.143425    4864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0929 10:20:40.162673    4864 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:20:40.166222    4864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:40.177755    4864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:40.270597    4864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:40.284034    4864 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460 for IP: 192.168.49.2
	I0929 10:20:40.284064    4864 certs.go:194] generating shared ca certs ...
	I0929 10:20:40.284094    4864 certs.go:226] acquiring lock for ca certs: {Name:mkddeaa430ffcc39cce53e20ea2b5588c6828a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:40.284248    4864 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key
	I0929 10:20:41.148008    4864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt ...
	I0929 10:20:41.148038    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt: {Name:mk6cd5a2b2357025aef1cfdbc4de50ce372c90c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:41.148266    4864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key ...
	I0929 10:20:41.148281    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key: {Name:mke8eedba85e2bd6cb6321b3771bc06a7c69648c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:41.148372    4864 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key
	I0929 10:20:41.373338    4864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.crt ...
	I0929 10:20:41.373367    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.crt: {Name:mk271a1753c83fc1bc090cbd15a01735b2528065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:41.373550    4864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key ...
	I0929 10:20:41.373561    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key: {Name:mkda31bbf2d1e63d03455253b4c03b75b671d901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:41.373646    4864 certs.go:256] generating profile certs ...
	I0929 10:20:41.373705    4864 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.key
	I0929 10:20:41.373724    4864 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt with IP's: []
	I0929 10:20:41.709478    4864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt ...
	I0929 10:20:41.709509    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: {Name:mk4dcc5c63329c4211d48c526055c7dd776d835b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:41.709690    4864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.key ...
	I0929 10:20:41.709702    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.key: {Name:mkb3930c5b14091649d2f8a0c36fa4ef3183aff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:41.709780    4864 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.key.2dc49556
	I0929 10:20:41.709803    4864 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.crt.2dc49556 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:20:42.985864    4864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.crt.2dc49556 ...
	I0929 10:20:42.985895    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.crt.2dc49556: {Name:mk251430defdf1fadddf8151e5682739ad3bf497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:42.986122    4864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.key.2dc49556 ...
	I0929 10:20:42.986139    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.key.2dc49556: {Name:mkb22891413792bdc64398bc44d2f6868f4c8d1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:42.986221    4864 certs.go:381] copying /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.crt.2dc49556 -> /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.crt
	I0929 10:20:42.986306    4864 certs.go:385] copying /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.key.2dc49556 -> /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.key
	I0929 10:20:42.986362    4864 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.key
	I0929 10:20:42.986381    4864 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.crt with IP's: []
	I0929 10:20:43.576411    4864 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.crt ...
	I0929 10:20:43.576446    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.crt: {Name:mkb85756f722a11dc8820c611d165f70ff9cbe4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:43.576633    4864 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.key ...
	I0929 10:20:43.576647    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.key: {Name:mk39f0e8444a3c9db050e0e376d3249c2ae18e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:43.576825    4864 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:20:43.576866    4864 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:20:43.576895    4864 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:20:43.576922    4864 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem (1679 bytes)
	I0929 10:20:43.577493    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:20:43.603946    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 10:20:43.629637    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:20:43.654382    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:20:43.678612    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:20:43.703497    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 10:20:43.727722    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:20:43.751964    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:20:43.776371    4864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:20:43.800864    4864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:20:43.818712    4864 ssh_runner.go:195] Run: openssl version
	I0929 10:20:43.824161    4864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:20:43.833906    4864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:43.837621    4864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:43.837714    4864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:43.844890    4864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
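The b5213941.0 name follows the OpenSSL subject-hash lookup convention (what c_rehash automates): the hash printed by the command two lines up becomes the symlink name, so TLS clients can locate the CA by hash. Equivalent by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"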
	I0929 10:20:43.854456    4864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:20:43.857777    4864 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:20:43.857831    4864 kubeadm.go:392] StartCluster: {Name:addons-718460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-718460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:20:43.857904    4864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:20:43.857964    4864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:20:43.896047    4864 cri.go:89] found id: ""
	I0929 10:20:43.896173    4864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:20:43.905044    4864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:20:43.913972    4864 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:20:43.914039    4864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:20:43.923408    4864 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:20:43.923429    4864 kubeadm.go:157] found existing configuration files:
	
	I0929 10:20:43.923499    4864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:20:43.932501    4864 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:20:43.932591    4864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:20:43.941263    4864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:20:43.949973    4864 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:20:43.950040    4864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:20:43.958340    4864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:20:43.968632    4864 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:20:43.968745    4864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:20:43.980156    4864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:20:43.989952    4864 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:20:43.990015    4864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:20:44.000825    4864 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:20:44.050438    4864 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:20:44.050552    4864 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:20:44.071927    4864 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:20:44.072074    4864 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 10:20:44.072130    4864 kubeadm.go:310] OS: Linux
	I0929 10:20:44.072216    4864 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:20:44.072298    4864 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 10:20:44.072378    4864 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:20:44.072460    4864 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:20:44.072540    4864 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:20:44.072620    4864 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:20:44.072697    4864 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:20:44.072777    4864 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:20:44.072865    4864 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 10:20:44.136288    4864 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:20:44.136467    4864 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:20:44.136600    4864 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:20:44.147507    4864 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:20:44.154062    4864 out.go:252]   - Generating certificates and keys ...
	I0929 10:20:44.154212    4864 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:20:44.154297    4864 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:20:44.574258    4864 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:20:44.842270    4864 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:20:45.300088    4864 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:20:46.111962    4864 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:20:46.185870    4864 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:20:46.186184    4864 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-718460 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:20:46.598640    4864 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:20:46.598780    4864 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-718460 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:20:47.409807    4864 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:20:47.718324    4864 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:20:48.410415    4864 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:20:48.410628    4864 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:20:49.001821    4864 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:20:49.632380    4864 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:20:49.826828    4864 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:20:49.944003    4864 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:20:50.605198    4864 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:20:50.605991    4864 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:20:50.608716    4864 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:20:50.612151    4864 out.go:252]   - Booting up control plane ...
	I0929 10:20:50.612264    4864 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:20:50.612352    4864 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:20:50.612730    4864 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:20:50.622690    4864 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:20:50.622802    4864 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:20:50.629477    4864 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:20:50.629582    4864 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:20:50.629624    4864 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:20:50.715891    4864 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:20:50.716020    4864 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:20:51.715401    4864 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000717129s
	I0929 10:20:51.719067    4864 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:20:51.719193    4864 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:20:51.719300    4864 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:20:51.719396    4864 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:20:54.212007    4864 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.492304594s
	I0929 10:20:56.239052    4864 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.51996589s
	I0929 10:20:58.223046    4864 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501803442s
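The three control-plane checks above can be reproduced from inside the node with the endpoints the log names; the components serve self-signed certificates, hence -k. Anonymous access to the apiserver's /livez is allowed by default RBAC:

	curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
	# each returns "ok" once healthy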
	I0929 10:20:58.240623    4864 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:20:58.259917    4864 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:20:58.274696    4864 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:20:58.274904    4864 kubeadm.go:310] [mark-control-plane] Marking the node addons-718460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:20:58.292379    4864 kubeadm.go:310] [bootstrap-token] Using token: 31q12d.1r9ru1d0c3pc5xbq
	I0929 10:20:58.297573    4864 out.go:252]   - Configuring RBAC rules ...
	I0929 10:20:58.297711    4864 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:20:58.301263    4864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:20:58.309192    4864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:20:58.312830    4864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:20:58.316559    4864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:20:58.323147    4864 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:20:58.629560    4864 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:20:59.069315    4864 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:20:59.628563    4864 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:20:59.628588    4864 kubeadm.go:310] 
	I0929 10:20:59.628658    4864 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:20:59.628671    4864 kubeadm.go:310] 
	I0929 10:20:59.628753    4864 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:20:59.628763    4864 kubeadm.go:310] 
	I0929 10:20:59.628790    4864 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:20:59.628855    4864 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:20:59.628915    4864 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:20:59.628924    4864 kubeadm.go:310] 
	I0929 10:20:59.628981    4864 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:20:59.628990    4864 kubeadm.go:310] 
	I0929 10:20:59.629040    4864 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:20:59.629048    4864 kubeadm.go:310] 
	I0929 10:20:59.629103    4864 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:20:59.629185    4864 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:20:59.629261    4864 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:20:59.629274    4864 kubeadm.go:310] 
	I0929 10:20:59.629367    4864 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:20:59.629451    4864 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:20:59.629459    4864 kubeadm.go:310] 
	I0929 10:20:59.629547    4864 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 31q12d.1r9ru1d0c3pc5xbq \
	I0929 10:20:59.629658    4864 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:392fe149ecb175ae356dba308b7f8297c4b5919f46577a9f98ac6b1b62a4c584 \
	I0929 10:20:59.629684    4864 kubeadm.go:310] 	--control-plane 
	I0929 10:20:59.629692    4864 kubeadm.go:310] 
	I0929 10:20:59.629781    4864 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:20:59.629789    4864 kubeadm.go:310] 
	I0929 10:20:59.629875    4864 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 31q12d.1r9ru1d0c3pc5xbq \
	I0929 10:20:59.629987    4864 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:392fe149ecb175ae356dba308b7f8297c4b5919f46577a9f98ac6b1b62a4c584 
	I0929 10:20:59.632280    4864 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 10:20:59.632510    4864 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 10:20:59.632615    4864 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:20:59.632631    4864 cni.go:84] Creating CNI manager for ""
	I0929 10:20:59.632638    4864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:20:59.635841    4864 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 10:20:59.638745    4864 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 10:20:59.642366    4864 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 10:20:59.642385    4864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 10:20:59.663722    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 10:20:59.945242    4864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:20:59.945395    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:59.945469    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-718460 minikube.k8s.io/updated_at=2025_09_29T10_20_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170 minikube.k8s.io/name=addons-718460 minikube.k8s.io/primary=true
	I0929 10:21:00.133842    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:00.133909    4864 ops.go:34] apiserver oom_adj: -16
	I0929 10:21:00.634413    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:01.133890    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:01.634517    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:02.134096    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:02.634884    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:03.134180    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:03.634020    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:04.134646    4864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:21:04.253464    4864 kubeadm.go:1105] duration metric: took 4.308114958s to wait for elevateKubeSystemPrivileges
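The repeated 'get sa default' calls above are a poll loop: elevateKubeSystemPrivileges finishes once the default ServiceAccount exists, which is what gates the RBAC binding created earlier. By hand it reduces to:

	until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done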
	I0929 10:21:04.253495    4864 kubeadm.go:394] duration metric: took 20.395661911s to StartCluster
	I0929 10:21:04.253513    4864 settings.go:142] acquiring lock: {Name:mk5a393e91300013a868ee870b6bf3cfd60dd530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:21:04.253650    4864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:21:04.254150    4864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/kubeconfig: {Name:mk74c1842d39026f9853151eb440c757ec3be664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:21:04.254377    4864 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:21:04.254586    4864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:21:04.254879    4864 config.go:182] Loaded profile config "addons-718460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:21:04.254910    4864 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:21:04.254999    4864 addons.go:69] Setting yakd=true in profile "addons-718460"
	I0929 10:21:04.255019    4864 addons.go:238] Setting addon yakd=true in "addons-718460"
	I0929 10:21:04.255048    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.255896    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.256176    4864 addons.go:69] Setting inspektor-gadget=true in profile "addons-718460"
	I0929 10:21:04.256192    4864 addons.go:238] Setting addon inspektor-gadget=true in "addons-718460"
	I0929 10:21:04.256217    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.256686    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.258814    4864 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-718460"
	I0929 10:21:04.258888    4864 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-718460"
	I0929 10:21:04.258930    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.259312    4864 addons.go:69] Setting metrics-server=true in profile "addons-718460"
	I0929 10:21:04.259450    4864 addons.go:238] Setting addon metrics-server=true in "addons-718460"
	I0929 10:21:04.259520    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.260070    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263282    4864 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-718460"
	I0929 10:21:04.266140    4864 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-718460"
	I0929 10:21:04.266185    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.266670    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263296    4864 addons.go:69] Setting registry=true in profile "addons-718460"
	I0929 10:21:04.275712    4864 addons.go:238] Setting addon registry=true in "addons-718460"
	I0929 10:21:04.275754    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.276270    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263300    4864 addons.go:69] Setting registry-creds=true in profile "addons-718460"
	I0929 10:21:04.276664    4864 addons.go:238] Setting addon registry-creds=true in "addons-718460"
	I0929 10:21:04.276698    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.277170    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263303    4864 addons.go:69] Setting storage-provisioner=true in profile "addons-718460"
	I0929 10:21:04.300556    4864 addons.go:238] Setting addon storage-provisioner=true in "addons-718460"
	I0929 10:21:04.300642    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.301338    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263306    4864 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718460"
	I0929 10:21:04.301846    4864 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718460"
	I0929 10:21:04.302241    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263309    4864 addons.go:69] Setting volcano=true in profile "addons-718460"
	I0929 10:21:04.331402    4864 addons.go:238] Setting addon volcano=true in "addons-718460"
	I0929 10:21:04.332040    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.332742    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263312    4864 addons.go:69] Setting volumesnapshots=true in profile "addons-718460"
	I0929 10:21:04.333528    4864 addons.go:238] Setting addon volumesnapshots=true in "addons-718460"
	I0929 10:21:04.333572    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.334098    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263350    4864 out.go:179] * Verifying Kubernetes components...
	I0929 10:21:04.263469    4864 addons.go:69] Setting gcp-auth=true in profile "addons-718460"
	I0929 10:21:04.348770    4864 mustload.go:65] Loading cluster: addons-718460
	I0929 10:21:04.349015    4864 config.go:182] Loaded profile config "addons-718460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:21:04.349329    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.263474    4864 addons.go:69] Setting cloud-spanner=true in profile "addons-718460"
	I0929 10:21:04.364087    4864 addons.go:238] Setting addon cloud-spanner=true in "addons-718460"
	I0929 10:21:04.364172    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.263478    4864 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718460"
	I0929 10:21:04.263481    4864 addons.go:69] Setting default-storageclass=true in profile "addons-718460"
	I0929 10:21:04.263488    4864 addons.go:69] Setting ingress-dns=true in profile "addons-718460"
	I0929 10:21:04.263493    4864 addons.go:69] Setting ingress=true in profile "addons-718460"
	I0929 10:21:04.265812    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.364489    4864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:21:04.367650    4864 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-718460"
	I0929 10:21:04.367697    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.368162    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.377010    4864 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:21:04.390606    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.390627    4864 addons.go:238] Setting addon ingress-dns=true in "addons-718460"
	I0929 10:21:04.399942    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.400559    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.390645    4864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718460"
	I0929 10:21:04.390654    4864 addons.go:238] Setting addon ingress=true in "addons-718460"
	I0929 10:21:04.403910    4864 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:21:04.404062    4864 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:21:04.404160    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.404490    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
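Note: the repeated `docker container inspect -f` template above resolves which host port Docker published for the container's 22/tcp, which the ssh clients below then dial on 127.0.0.1. A sketch invoking the same command from Go; the container name is the one from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort runs the exact docker template the log shows to find the
// host port mapped to the container's 22/tcp.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := hostSSHPort("addons-718460")
	fmt.Println(port, err) // 32768 in this run
}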
	I0929 10:21:04.403923    4864 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:21:04.439936    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.440758    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.474294    4864 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:21:04.474314    4864 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:21:04.474399    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.542950    4864 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:21:04.546213    4864 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:21:04.546238    4864 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:21:04.546321    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.548119    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.579349    4864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 10:21:04.596315    4864 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:21:04.599353    4864 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:21:04.600118    4864 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:21:04.640203    4864 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-718460"
	I0929 10:21:04.640243    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.640766    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.657625    4864 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:21:04.657646    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:21:04.657708    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.683209    4864 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:21:04.683324    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:21:04.687449    4864 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:21:04.687474    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:21:04.687545    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.689413    4864 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:21:04.689437    4864 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:21:04.689509    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.708252    4864 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:21:04.710810    4864 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:21:04.710828    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:21:04.710893    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.712505    4864 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:21:04.712524    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:21:04.712586    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.733808    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	W0929 10:21:04.743459    4864 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 10:21:04.747195    4864 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:21:04.747409    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:21:04.747569    4864 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:21:04.747735    4864 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:21:04.751222    4864 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:21:04.751253    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:21:04.751333    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.772762    4864 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:21:04.772833    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:21:04.772913    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.777187    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:21:04.782219    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:21:04.785247    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:21:04.788064    4864 addons.go:238] Setting addon default-storageclass=true in "addons-718460"
	I0929 10:21:04.788121    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:04.788620    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:04.815048    4864 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:21:04.815146    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:21:04.815258    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.832927    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.839628    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.843243    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:21:04.844275    4864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:21:04.848156    4864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:21:04.851292    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:21:04.853170    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.855849    4864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:21:04.866034    4864 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:21:04.866063    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:21:04.866139    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.866311    4864 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:21:04.869243    4864 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:21:04.869329    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:21:04.869344    4864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:21:04.869416    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.875260    4864 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:21:04.878194    4864 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:21:04.878216    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:21:04.878293    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:04.913145    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.932900    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.955447    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.959734    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:04.974047    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:05.013770    4864 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:21:05.013794    4864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:21:05.015814    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:05.023161    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:05.059169    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:05.069117    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:05.070130    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:05.070672    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	W0929 10:21:05.078847    4864 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:21:05.078907    4864 retry.go:31] will retry after 202.142263ms: ssh: handshake failed: EOF
	W0929 10:21:05.078998    4864 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:21:05.079005    4864 retry.go:31] will retry after 213.45486ms: ssh: handshake failed: EOF
	I0929 10:21:05.095831    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	W0929 10:21:05.097369    4864 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:21:05.097395    4864 retry.go:31] will retry after 356.536073ms: ssh: handshake failed: EOF
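Note: the sshutil/retry.go lines above show transient ssh handshake EOFs being retried after short, slightly randomized delays (202ms, 213ms, 356ms). A minimal sketch of that retry-with-jitter shape; delays, attempt count, and the failing dial function are illustrative stand-ins, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing delay
// between tries, the pattern the "will retry after ..." lines suggest.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	_ = retry(3, 200*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed: EOF") // stand-in for the dial
	})
}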
	I0929 10:21:05.101867    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:05.159641    4864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:21:05.284536    4864 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:21:05.284620    4864 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:21:05.310584    4864 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:05.310664    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:21:05.371625    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:21:05.402028    4864 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:21:05.402108    4864 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:21:05.416695    4864 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:21:05.416789    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:21:05.474571    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:21:05.479043    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:21:05.482200    4864 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:21:05.482279    4864 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:21:05.484748    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:05.530839    4864 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:21:05.530928    4864 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:21:05.559911    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:21:05.588488    4864 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:21:05.588603    4864 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:21:05.632524    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:21:05.637347    4864 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:21:05.637426    4864 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:21:05.645317    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:21:05.651421    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:21:05.651495    4864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:21:05.672635    4864 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:21:05.672709    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:21:05.700044    4864 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:21:05.700149    4864 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:21:05.777505    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:21:05.819474    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:21:05.822435    4864 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:21:05.822503    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:21:05.825959    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:21:05.826058    4864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:21:05.837327    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:21:05.909321    4864 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:21:05.909410    4864 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:21:05.919584    4864 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:21:05.919663    4864 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:21:06.014168    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:21:06.034088    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:21:06.034163    4864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:21:06.042559    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:21:06.184247    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:21:06.184314    4864 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:21:06.188876    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:21:06.237298    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:21:06.237375    4864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:21:06.357388    4864 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:21:06.357460    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:21:06.372111    4864 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:21:06.372193    4864 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:21:06.612431    4864 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:21:06.612507    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:21:06.659866    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:21:06.789291    4864 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:21:06.789365    4864 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:21:06.939893    4864 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:21:06.939964    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:21:07.035595    4864 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:21:07.035666    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:21:07.195551    4864 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:21:07.195620    4864 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:21:07.332030    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:21:07.753755    4864 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.594078927s)
	I0929 10:21:07.754153    4864 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.174774695s)
	I0929 10:21:07.754304    4864 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
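Note: the completed sed pipeline above rewrites the coredns ConfigMap so host.minikube.internal resolves to the gateway (192.168.49.1), inserting a hosts block ahead of the forward directive. A client-go sketch of the equivalent edit; in-cluster config, the "Corefile" data key, and the elided error handling are assumptions for brevity:

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, _ := rest.InClusterConfig()
	cs, _ := kubernetes.NewForConfig(cfg)
	cm, _ := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})

	// Insert the hosts block before the forward directive, as the sed does.
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward .", hosts+"        forward .", 1)

	_, _ = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
}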
	I0929 10:21:07.755387    4864 node_ready.go:35] waiting up to 6m0s for node "addons-718460" to be "Ready" ...
	I0929 10:21:08.129058    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.757400146s)
	I0929 10:21:08.577992    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.103331616s)
	I0929 10:21:08.645346    4864 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-718460" context rescaled to 1 replicas
	W0929 10:21:09.794545    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
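Note: node_ready.go polls the node's Ready condition until it flips to True (it stays False here until kubelet and the CNI settle). A minimal sketch of that check with client-go, using the node name and node-side kubeconfig path from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)
	node, _ := cs.CoreV1().Nodes().Get(context.TODO(), "addons-718460", metav1.GetOptions{})
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready:", c.Status) // "False" until the node settles
		}
	}
}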
	I0929 10:21:10.335451    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.775457593s)
	I0929 10:21:10.335520    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.702915567s)
	I0929 10:21:10.335542    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.690155216s)
	I0929 10:21:10.335725    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.558199099s)
	I0929 10:21:10.335770    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.516214781s)
	I0929 10:21:10.335811    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.498406416s)
	I0929 10:21:10.335969    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.856702977s)
	I0929 10:21:10.335996    4864 addons.go:479] Verifying addon ingress=true in "addons-718460"
	I0929 10:21:10.336112    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.850614835s)
	W0929 10:21:10.336152    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:10.336205    4864 retry.go:31] will retry after 335.974214ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
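Note: the "apiVersion not set, kind not set" failure is consistent with the earlier transfer line, which shows ig-crd.yaml arriving on the node at only 14 bytes, i.e. an essentially empty manifest; that is the likely root cause rather than a schema problem. A preflight check like the sketch below would catch it before kubectl does; the function name is illustrative and the YAML parsing uses gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// hasTypeMeta reports whether a manifest carries the apiVersion and kind
// that kubectl's validation demanded above. Illustrative preflight only.
func hasTypeMeta(path string) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	var m struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}
	if err := yaml.Unmarshal(raw, &m); err != nil {
		return false, err
	}
	return m.APIVersion != "" && m.Kind != "", nil
}

func main() {
	ok, err := hasTypeMeta("/etc/kubernetes/addons/ig-crd.yaml")
	fmt.Println(ok, err) // false for a 14-byte, effectively empty file
}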
	I0929 10:21:10.336294    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.322046599s)
	I0929 10:21:10.336460    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.293826964s)
	I0929 10:21:10.336493    4864 addons.go:479] Verifying addon registry=true in "addons-718460"
	I0929 10:21:10.336538    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.147581968s)
	I0929 10:21:10.336566    4864 addons.go:479] Verifying addon metrics-server=true in "addons-718460"
	I0929 10:21:10.339171    4864 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-718460 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:21:10.341330    4864 out.go:179] * Verifying ingress addon...
	I0929 10:21:10.341437    4864 out.go:179] * Verifying registry addon...
	I0929 10:21:10.346712    4864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:21:10.346850    4864 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:21:10.356419    4864 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:21:10.356439    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.357778    4864 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:21:10.357843    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:21:10.365812    4864 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
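Note: the storage-provisioner-rancher failure above is a standard optimistic-concurrency conflict: the StorageClass changed between read and write. The usual remedy is to re-read and reapply the mutation under client-go's RetryOnConflict, as in this sketch; the class name comes from the error message, and in-cluster config plus the skipped error handling are assumptions:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, _ := rest.InClusterConfig()
	cs, _ := kubernetes.NewForConfig(cfg)

	// Re-fetch and mutate on every attempt so each Update carries the
	// latest resourceVersion, avoiding "the object has been modified".
	_ = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}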
	I0929 10:21:10.396610    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.736664617s)
	W0929 10:21:10.396709    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:21:10.396740    4864 retry.go:31] will retry after 159.28054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
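Note: "ensure CRDs are installed first" means the VolumeSnapshotClass object raced the CRDs created in the same apply; the --force retry below succeeds once the CRDs are established. A sketch of waiting for the Established condition before applying dependent objects, using the apiextensions clientset; the polling interval and timeout are arbitrary:

package main

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/rest"
)

// waitEstablished blocks until the named CRD reports Established=True,
// after which CRs such as VolumeSnapshotClass can be applied safely.
func waitEstablished(c apiextclient.Interface, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		for _, cond := range crd.Status.Conditions {
			if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, _ := rest.InClusterConfig()
	c, _ := apiextclient.NewForConfig(cfg)
	_ = waitEstablished(c, "volumesnapshotclasses.snapshot.storage.k8s.io")
}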
	I0929 10:21:10.556611    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:21:10.672388    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:10.956422    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.956667    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.119346    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.787221679s)
	I0929 10:21:11.119431    4864 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-718460"
	I0929 10:21:11.124457    4864 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:21:11.128149    4864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:21:11.189711    4864 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:21:11.189786    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
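Note: the kapi.go lines poll pods matching a label selector until they leave Pending. A condensed sketch of that loop; the selector and namespace come from the csi-hostpath lines above, while the interval, timeout, and kubeconfig path are illustrative:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)
	sel := "kubernetes.io/minikube-addons=csi-hostpath-driver"

	_ = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil || len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // still Pending, as logged above
			}
		}
		return true, nil
	})
}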
	I0929 10:21:11.352120    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.352242    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.632502    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.850832    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.851047    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.132252    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:12.259070    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:12.350758    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.351001    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.632249    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.851037    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.851120    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.137986    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:13.352674    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.352804    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.528109    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.971402963s)
	I0929 10:21:13.528233    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.855764743s)
	W0929 10:21:13.528344    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.528377    4864 retry.go:31] will retry after 547.053142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:21:13.631777    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:13.850724    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.851156    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.076604    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:14.132385    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:14.351994    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.353151    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.632134    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:14.758409    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:14.852590    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.853097    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:21:14.906700    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:14.906762    4864 retry.go:31] will retry after 566.603883ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:21:15.131752    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.350994    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.351174    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.474451    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:15.632664    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.852032    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.852530    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.878976    4864 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:21:15.879105    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:15.898752    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:16.034594    4864 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:21:16.054489    4864 addons.go:238] Setting addon gcp-auth=true in "addons-718460"
	I0929 10:21:16.054534    4864 host.go:66] Checking if "addons-718460" exists ...
	I0929 10:21:16.055001    4864 cli_runner.go:164] Run: docker container inspect addons-718460 --format={{.State.Status}}
	I0929 10:21:16.079374    4864 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:21:16.079442    4864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718460
	I0929 10:21:16.098977    4864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/addons-718460/id_rsa Username:docker}
	I0929 10:21:16.133007    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.352157    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.352511    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:21:16.362252    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:16.362283    4864 retry.go:31] will retry after 918.313088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:16.365896    4864 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:21:16.368866    4864 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:21:16.371588    4864 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:21:16.371613    4864 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:21:16.390394    4864 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:21:16.390413    4864 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:21:16.408357    4864 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:21:16.408380    4864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:21:16.427063    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:21:16.631636    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:16.758602    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:16.856233    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.856423    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.918271    4864 addons.go:479] Verifying addon gcp-auth=true in "addons-718460"
	I0929 10:21:16.921504    4864 out.go:179] * Verifying gcp-auth addon...
	I0929 10:21:16.925153    4864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:21:16.953198    4864 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:21:16.953227    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
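
While the verification loop polls, the same gcp-auth check can be made by hand; a sketch using the namespace and label selector taken from the kapi.go lines above:

    # One-off view of the pod the gcp-auth wait loop is polling for.
    kubectl --context addons-718460 -n gcp-auth get pods \
      -l kubernetes.io/minikube-addons=gcp-auth
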
	I0929 10:21:17.131320    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.281484    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:17.353920    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.356073    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.427910    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:17.634464    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.852528    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.853362    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.928656    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:21:18.111098    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:18.111147    4864 retry.go:31] will retry after 1.410572728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:18.132527    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.350414    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.350584    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.428385    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.631591    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.851535    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.851764    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.928544    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.132790    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:19.259051    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:19.350996    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.351182    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.429000    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.522153    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:19.632380    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.853025    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.853169    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.929512    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.132133    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:20.339232    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:20.339265    4864 retry.go:31] will retry after 1.885222009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:20.350441    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.350509    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.429115    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.631335    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.850934    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.851371    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.928038    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.131878    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.350547    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.350678    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.428278    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.631453    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:21.758738    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:21.850533    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.850899    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.928681    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.132121    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.225341    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:22.351432    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.352502    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.429457    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.632382    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.851932    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.852742    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.928385    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:21:23.032086    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:23.032178    4864 retry.go:31] will retry after 4.255548074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:23.131357    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.350206    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:23.350335    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.428250    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.631469    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.850980    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.851399    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:23.928067    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.131710    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:24.258637    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:24.350027    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:24.350201    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.428019    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.631339    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.850246    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.850317    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:24.928124    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.131044    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.351794    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:25.351915    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.428903    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.632353    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.849918    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.850028    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:25.929005    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.132143    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.350842    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:26.351217    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.428884    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.631928    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:26.758674    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:26.849843    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:26.850055    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.928619    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.131610    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.288287    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:27.356663    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:27.357911    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.428914    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.632140    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.852073    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.852511    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:27.931310    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.133531    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:28.137872    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:28.137900    4864 retry.go:31] will retry after 5.272244909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
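
The stderr itself names the escape hatch: --validate=false. A sketch of re-running the exact command from the log with validation disabled, useful only to confirm the remainder of the manifests apply cleanly; the underlying fix is a well-formed ig-crd.yaml, not skipping validation:

    # Same command the addon manager retries (paths from the log),
    # with the validation step the error message suggests turned off.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml
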
	I0929 10:21:28.351180    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.351359    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:28.428238    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.631246    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:28.760205    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:28.851165    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.851414    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:28.928323    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.131276    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.349551    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.349998    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:29.429007    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.630885    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.853183    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.853244    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:29.928148    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.132533    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.350820    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:30.350956    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.429046    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.631957    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.850583    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.850635    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:30.928203    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.131976    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:31.258524    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:31.350669    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:31.351502    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.428360    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.631428    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.850386    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:31.850517    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.928204    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.131311    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.350333    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:32.350475    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.428235    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.631452    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.850364    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:32.850533    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.928487    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.131538    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:33.259059    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:33.350328    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:33.350540    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.410465    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:33.428877    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.632210    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.852776    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:33.853411    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.928389    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.134405    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:34.263476    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:34.263530    4864 retry.go:31] will retry after 4.892455088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:34.351411    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:34.351697    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.428503    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.631712    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.850573    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:34.851052    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.929129    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.131239    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.351014    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:35.351280    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.428971    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.631965    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:35.758817    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:35.850156    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:35.850456    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.928010    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.131981    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.349701    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:36.350009    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.428955    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.631687    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.850918    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.851189    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:36.928890    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.131881    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.350720    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:37.351083    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.428852    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.631842    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:37.759049    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:37.850491    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:37.850844    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.928990    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.131231    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.349955    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:38.350193    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.428100    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.631563    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.850087    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:38.850242    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.928044    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.130914    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.156143    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:39.350526    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:39.350677    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.428786    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.633911    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.851170    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.851726    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:39.929441    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:21:39.974308    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:39.974338    4864 retry.go:31] will retry after 7.729853104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:40.131325    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:40.258043    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:40.350252    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:40.350510    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.428220    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.631274    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.851056    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.851065    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:40.928534    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.131538    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.350620    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:41.350940    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.428685    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.632024    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.850699    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:41.850934    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.928681    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.134651    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:42.259159    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:42.350760    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:42.350987    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.429010    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.631006    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.850063    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:42.850341    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.928181    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.131026    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.349855    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:43.350136    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.428122    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.631282    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.850139    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.850330    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:43.928168    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.130939    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.349921    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:44.350445    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.428228    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.631684    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:44.759110    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:44.850311    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:44.850918    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.928807    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.136412    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.352211    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.355294    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:45.428436    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.631651    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.851172    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:45.851384    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.928920    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.131946    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.349610    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:46.349925    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.428913    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.631957    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.850652    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:46.851270    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.928237    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.131226    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:47.259391    4864 node_ready.go:57] node "addons-718460" has "Ready":"False" status (will retry)
	I0929 10:21:47.350504    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:47.350699    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:47.428791    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.631894    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.705044    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:47.948425    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:47.957112    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.091418    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.168243    4864 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:21:48.168316    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.375643    4864 node_ready.go:49] node "addons-718460" is "Ready"
	I0929 10:21:48.375719    4864 node_ready.go:38] duration metric: took 40.620268928s for node "addons-718460" to be "Ready" ...
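
node_ready.go polled the node's Ready condition for roughly 40s before it flipped to True. An equivalent one-shot check from outside the test harness (the timeout value is arbitrary, not taken from the log):

    # Block until the node reports condition Ready=True, as the
    # node_ready.go loop above does by repeated polling.
    kubectl --context addons-718460 wait --for=condition=Ready \
      node/addons-718460 --timeout=120s
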
	I0929 10:21:48.375748    4864 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:21:48.375827    4864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:21:48.396976    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.398155    4864 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:21:48.398214    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:48.452725    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.647150    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.902994    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:48.926155    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.958403    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.193224    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.300094    4864 api_server.go:72] duration metric: took 45.045687688s to wait for apiserver process to appear ...
	I0929 10:21:49.300204    4864 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:21:49.300243    4864 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:21:49.300138    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.595062176s)
	W0929 10:21:49.300371    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:49.300412    4864 retry.go:31] will retry after 7.528042919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
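The retry.go:31 lines show the failed addon apply being retried with a growing delay (7.5s here, then 28.6s on the next failure below). A minimal sketch of that retry-with-backoff shape, assuming a simple doubling schedule with a cap and a fixed attempt budget rather than minikube's exact backoff policy:

```go
package main

import (
	"fmt"
	"time"
)

// retryWithBackoff keeps calling apply, roughly doubling the wait after each
// failure up to a cap, like the addon apply loop in the log above.
func retryWithBackoff(apply func() error, wait, cap time.Duration, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if wait *= 2; wait > cap {
			wait = cap
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 { // fail twice, then succeed
			return fmt.Errorf("kubectl apply: exit status 1")
		}
		return nil
	}, 5*time.Second, 30*time.Second, 5)
	fmt.Println("result:", err)
}
```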
	I0929 10:21:49.340182    4864 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 10:21:49.349276    4864 api_server.go:141] control plane version: v1.34.0
	I0929 10:21:49.349300    4864 api_server.go:131] duration metric: took 49.0697ms to wait for apiserver health ...
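The healthz wait in api_server.go boils down to polling GET /healthz on the apiserver until it returns 200 with an "ok" body, as it does two lines above. A minimal sketch; skipping TLS verification is an assumption to keep it self-contained (a real client would trust the cluster CA from the kubeconfig), and it assumes the cluster allows anonymous access to the health endpoints:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz fetches the apiserver health endpoint and treats a 200
// response as healthy, printing the body ("ok") like the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut only; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.49.2:8443/healthz"); err != nil {
		panic(err)
	}
}
```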
	I0929 10:21:49.349309    4864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:21:49.404392    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:49.406356    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.406947    4864 system_pods.go:59] 19 kube-system pods found
	I0929 10:21:49.407006    4864 system_pods.go:61] "coredns-66bc5c9577-bbxfm" [1ab9139a-c748-435d-9625-f2f2192694c3] Pending
	I0929 10:21:49.407032    4864 system_pods.go:61] "csi-hostpath-attacher-0" [958beda8-5e13-4363-a1d6-b5cd482a5cbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:49.407073    4864 system_pods.go:61] "csi-hostpath-resizer-0" [ea0f9b02-073e-4d8d-99ba-cc8335a30d57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:49.407094    4864 system_pods.go:61] "csi-hostpathplugin-brrn2" [71f899e6-1ac1-49a8-83d8-af6a5318a1a8] Pending
	I0929 10:21:49.407115    4864 system_pods.go:61] "etcd-addons-718460" [6d955d0a-3cee-4af8-bb88-e0c4bd1da370] Running
	I0929 10:21:49.407167    4864 system_pods.go:61] "kindnet-9x6tm" [17c04c98-7bd7-41ee-afc0-6c790305ce3c] Running
	I0929 10:21:49.407188    4864 system_pods.go:61] "kube-apiserver-addons-718460" [93b6cef5-bdb2-4b18-8e1b-1ac07be12138] Running
	I0929 10:21:49.407207    4864 system_pods.go:61] "kube-controller-manager-addons-718460" [531ff0d3-8d38-4fef-bfa4-5c395218c74f] Running
	I0929 10:21:49.407342    4864 system_pods.go:61] "kube-ingress-dns-minikube" [582a3d23-f7fc-41d8-8dc7-0cbb957e8900] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:49.407363    4864 system_pods.go:61] "kube-proxy-6ln4j" [304d67a8-0fbf-46c7-9a5b-36da788ef137] Running
	I0929 10:21:49.407382    4864 system_pods.go:61] "kube-scheduler-addons-718460" [2fda38af-2d54-4daf-a2b5-35cdf3fb44b6] Running
	I0929 10:21:49.407410    4864 system_pods.go:61] "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:49.407441    4864 system_pods.go:61] "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:49.407465    4864 system_pods.go:61] "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:49.407495    4864 system_pods.go:61] "registry-creds-764b6fb674-4fzwl" [0c56d9a6-d1d3-47d5-818d-9c40431e5cf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:49.407513    4864 system_pods.go:61] "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Pending
	I0929 10:21:49.407537    4864 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cnvrp" [30f17c07-6399-415b-8d2f-067c52f3719b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:49.407567    4864 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r4bnf" [0703cc38-418f-428c-9231-050f4e56adc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:49.407590    4864 system_pods.go:61] "storage-provisioner" [6b0e933e-79ef-4387-9d57-244f74346b34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:49.407610    4864 system_pods.go:74] duration metric: took 58.294697ms to wait for pod list to return data ...
	I0929 10:21:49.407647    4864 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:21:49.491864    4864 default_sa.go:45] found service account: "default"
	I0929 10:21:49.491943    4864 default_sa.go:55] duration metric: took 84.277793ms for default service account to be created ...
	I0929 10:21:49.491977    4864 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:21:49.531799    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.555550    4864 system_pods.go:86] 19 kube-system pods found
	I0929 10:21:49.555639    4864 system_pods.go:89] "coredns-66bc5c9577-bbxfm" [1ab9139a-c748-435d-9625-f2f2192694c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:49.555664    4864 system_pods.go:89] "csi-hostpath-attacher-0" [958beda8-5e13-4363-a1d6-b5cd482a5cbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:49.555701    4864 system_pods.go:89] "csi-hostpath-resizer-0" [ea0f9b02-073e-4d8d-99ba-cc8335a30d57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:49.555726    4864 system_pods.go:89] "csi-hostpathplugin-brrn2" [71f899e6-1ac1-49a8-83d8-af6a5318a1a8] Pending
	I0929 10:21:49.555745    4864 system_pods.go:89] "etcd-addons-718460" [6d955d0a-3cee-4af8-bb88-e0c4bd1da370] Running
	I0929 10:21:49.555776    4864 system_pods.go:89] "kindnet-9x6tm" [17c04c98-7bd7-41ee-afc0-6c790305ce3c] Running
	I0929 10:21:49.555799    4864 system_pods.go:89] "kube-apiserver-addons-718460" [93b6cef5-bdb2-4b18-8e1b-1ac07be12138] Running
	I0929 10:21:49.555817    4864 system_pods.go:89] "kube-controller-manager-addons-718460" [531ff0d3-8d38-4fef-bfa4-5c395218c74f] Running
	I0929 10:21:49.555836    4864 system_pods.go:89] "kube-ingress-dns-minikube" [582a3d23-f7fc-41d8-8dc7-0cbb957e8900] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:49.555866    4864 system_pods.go:89] "kube-proxy-6ln4j" [304d67a8-0fbf-46c7-9a5b-36da788ef137] Running
	I0929 10:21:49.555891    4864 system_pods.go:89] "kube-scheduler-addons-718460" [2fda38af-2d54-4daf-a2b5-35cdf3fb44b6] Running
	I0929 10:21:49.555911    4864 system_pods.go:89] "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:49.555945    4864 system_pods.go:89] "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:49.555969    4864 system_pods.go:89] "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:49.555989    4864 system_pods.go:89] "registry-creds-764b6fb674-4fzwl" [0c56d9a6-d1d3-47d5-818d-9c40431e5cf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:49.556017    4864 system_pods.go:89] "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Pending
	I0929 10:21:49.556041    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cnvrp" [30f17c07-6399-415b-8d2f-067c52f3719b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:49.556062    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r4bnf" [0703cc38-418f-428c-9231-050f4e56adc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:49.556081    4864 system_pods.go:89] "storage-provisioner" [6b0e933e-79ef-4387-9d57-244f74346b34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:49.556126    4864 retry.go:31] will retry after 254.0627ms: missing components: kube-dns
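The "missing components" retries come from a gate that scans the kube-system pod list and requires a Running pod for each expected component; here only kube-dns (CoreDNS) is still Pending. A minimal sketch of that check as a pure function over a pod list; the k8s-app label key is the standard one CoreDNS carries, but the required-component set is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// missing returns the required components that have no Running pod, keyed by
// the standard k8s-app label (CoreDNS pods carry k8s-app=kube-dns).
func missing(pods []corev1.Pod, required []string) []string {
	running := map[string]bool{}
	for _, p := range pods {
		if p.Status.Phase == corev1.PodRunning {
			running[p.Labels["k8s-app"]] = true
		}
	}
	var out []string
	for _, c := range required {
		if !running[c] {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	pods := []corev1.Pod{{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "coredns-66bc5c9577-bbxfm",
			Labels: map[string]string{"k8s-app": "kube-dns"},
		},
		Status: corev1.PodStatus{Phase: corev1.PodPending},
	}}
	// A Pending CoreDNS pod yields [kube-dns], so the caller logs
	// "missing components: kube-dns" and retries, exactly as above.
	fmt.Println(missing(pods, []string{"kube-dns"}))
}
```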
	I0929 10:21:49.659577    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.820431    4864 system_pods.go:86] 19 kube-system pods found
	I0929 10:21:49.820513    4864 system_pods.go:89] "coredns-66bc5c9577-bbxfm" [1ab9139a-c748-435d-9625-f2f2192694c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:49.820538    4864 system_pods.go:89] "csi-hostpath-attacher-0" [958beda8-5e13-4363-a1d6-b5cd482a5cbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:49.820575    4864 system_pods.go:89] "csi-hostpath-resizer-0" [ea0f9b02-073e-4d8d-99ba-cc8335a30d57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:49.820601    4864 system_pods.go:89] "csi-hostpathplugin-brrn2" [71f899e6-1ac1-49a8-83d8-af6a5318a1a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:49.820619    4864 system_pods.go:89] "etcd-addons-718460" [6d955d0a-3cee-4af8-bb88-e0c4bd1da370] Running
	I0929 10:21:49.820639    4864 system_pods.go:89] "kindnet-9x6tm" [17c04c98-7bd7-41ee-afc0-6c790305ce3c] Running
	I0929 10:21:49.820674    4864 system_pods.go:89] "kube-apiserver-addons-718460" [93b6cef5-bdb2-4b18-8e1b-1ac07be12138] Running
	I0929 10:21:49.820693    4864 system_pods.go:89] "kube-controller-manager-addons-718460" [531ff0d3-8d38-4fef-bfa4-5c395218c74f] Running
	I0929 10:21:49.820713    4864 system_pods.go:89] "kube-ingress-dns-minikube" [582a3d23-f7fc-41d8-8dc7-0cbb957e8900] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:49.820748    4864 system_pods.go:89] "kube-proxy-6ln4j" [304d67a8-0fbf-46c7-9a5b-36da788ef137] Running
	I0929 10:21:49.820767    4864 system_pods.go:89] "kube-scheduler-addons-718460" [2fda38af-2d54-4daf-a2b5-35cdf3fb44b6] Running
	I0929 10:21:49.820788    4864 system_pods.go:89] "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:49.820819    4864 system_pods.go:89] "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:49.820850    4864 system_pods.go:89] "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:49.820869    4864 system_pods.go:89] "registry-creds-764b6fb674-4fzwl" [0c56d9a6-d1d3-47d5-818d-9c40431e5cf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:49.820901    4864 system_pods.go:89] "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:49.820924    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cnvrp" [30f17c07-6399-415b-8d2f-067c52f3719b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:49.820944    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r4bnf" [0703cc38-418f-428c-9231-050f4e56adc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:49.820976    4864 system_pods.go:89] "storage-provisioner" [6b0e933e-79ef-4387-9d57-244f74346b34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:49.821009    4864 retry.go:31] will retry after 313.166538ms: missing components: kube-dns
	I0929 10:21:49.919823    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.920004    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:49.929210    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.132556    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.140079    4864 system_pods.go:86] 19 kube-system pods found
	I0929 10:21:50.140116    4864 system_pods.go:89] "coredns-66bc5c9577-bbxfm" [1ab9139a-c748-435d-9625-f2f2192694c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:50.140124    4864 system_pods.go:89] "csi-hostpath-attacher-0" [958beda8-5e13-4363-a1d6-b5cd482a5cbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:50.140133    4864 system_pods.go:89] "csi-hostpath-resizer-0" [ea0f9b02-073e-4d8d-99ba-cc8335a30d57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:50.140140    4864 system_pods.go:89] "csi-hostpathplugin-brrn2" [71f899e6-1ac1-49a8-83d8-af6a5318a1a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:50.140148    4864 system_pods.go:89] "etcd-addons-718460" [6d955d0a-3cee-4af8-bb88-e0c4bd1da370] Running
	I0929 10:21:50.140154    4864 system_pods.go:89] "kindnet-9x6tm" [17c04c98-7bd7-41ee-afc0-6c790305ce3c] Running
	I0929 10:21:50.140158    4864 system_pods.go:89] "kube-apiserver-addons-718460" [93b6cef5-bdb2-4b18-8e1b-1ac07be12138] Running
	I0929 10:21:50.140162    4864 system_pods.go:89] "kube-controller-manager-addons-718460" [531ff0d3-8d38-4fef-bfa4-5c395218c74f] Running
	I0929 10:21:50.140172    4864 system_pods.go:89] "kube-ingress-dns-minikube" [582a3d23-f7fc-41d8-8dc7-0cbb957e8900] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:50.140176    4864 system_pods.go:89] "kube-proxy-6ln4j" [304d67a8-0fbf-46c7-9a5b-36da788ef137] Running
	I0929 10:21:50.140181    4864 system_pods.go:89] "kube-scheduler-addons-718460" [2fda38af-2d54-4daf-a2b5-35cdf3fb44b6] Running
	I0929 10:21:50.140196    4864 system_pods.go:89] "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:50.140205    4864 system_pods.go:89] "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:50.140215    4864 system_pods.go:89] "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:50.140222    4864 system_pods.go:89] "registry-creds-764b6fb674-4fzwl" [0c56d9a6-d1d3-47d5-818d-9c40431e5cf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:50.140255    4864 system_pods.go:89] "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:50.140263    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cnvrp" [30f17c07-6399-415b-8d2f-067c52f3719b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:50.140272    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r4bnf" [0703cc38-418f-428c-9231-050f4e56adc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:50.140282    4864 system_pods.go:89] "storage-provisioner" [6b0e933e-79ef-4387-9d57-244f74346b34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:50.140298    4864 retry.go:31] will retry after 404.769796ms: missing components: kube-dns
	I0929 10:21:50.350805    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:50.352092    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.428207    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.550574    4864 system_pods.go:86] 19 kube-system pods found
	I0929 10:21:50.550610    4864 system_pods.go:89] "coredns-66bc5c9577-bbxfm" [1ab9139a-c748-435d-9625-f2f2192694c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:50.550619    4864 system_pods.go:89] "csi-hostpath-attacher-0" [958beda8-5e13-4363-a1d6-b5cd482a5cbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:50.550627    4864 system_pods.go:89] "csi-hostpath-resizer-0" [ea0f9b02-073e-4d8d-99ba-cc8335a30d57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:50.550635    4864 system_pods.go:89] "csi-hostpathplugin-brrn2" [71f899e6-1ac1-49a8-83d8-af6a5318a1a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:50.550641    4864 system_pods.go:89] "etcd-addons-718460" [6d955d0a-3cee-4af8-bb88-e0c4bd1da370] Running
	I0929 10:21:50.550647    4864 system_pods.go:89] "kindnet-9x6tm" [17c04c98-7bd7-41ee-afc0-6c790305ce3c] Running
	I0929 10:21:50.550652    4864 system_pods.go:89] "kube-apiserver-addons-718460" [93b6cef5-bdb2-4b18-8e1b-1ac07be12138] Running
	I0929 10:21:50.550657    4864 system_pods.go:89] "kube-controller-manager-addons-718460" [531ff0d3-8d38-4fef-bfa4-5c395218c74f] Running
	I0929 10:21:50.550663    4864 system_pods.go:89] "kube-ingress-dns-minikube" [582a3d23-f7fc-41d8-8dc7-0cbb957e8900] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:50.550667    4864 system_pods.go:89] "kube-proxy-6ln4j" [304d67a8-0fbf-46c7-9a5b-36da788ef137] Running
	I0929 10:21:50.550675    4864 system_pods.go:89] "kube-scheduler-addons-718460" [2fda38af-2d54-4daf-a2b5-35cdf3fb44b6] Running
	I0929 10:21:50.550682    4864 system_pods.go:89] "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:50.550688    4864 system_pods.go:89] "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:50.550700    4864 system_pods.go:89] "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:50.550706    4864 system_pods.go:89] "registry-creds-764b6fb674-4fzwl" [0c56d9a6-d1d3-47d5-818d-9c40431e5cf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:50.550713    4864 system_pods.go:89] "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:50.550723    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cnvrp" [30f17c07-6399-415b-8d2f-067c52f3719b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:50.550730    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r4bnf" [0703cc38-418f-428c-9231-050f4e56adc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:50.550735    4864 system_pods.go:89] "storage-provisioner" [6b0e933e-79ef-4387-9d57-244f74346b34] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:50.550750    4864 retry.go:31] will retry after 550.422166ms: missing components: kube-dns
	I0929 10:21:50.632664    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.851688    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:50.852041    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.936369    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.110371    4864 system_pods.go:86] 19 kube-system pods found
	I0929 10:21:51.110455    4864 system_pods.go:89] "coredns-66bc5c9577-bbxfm" [1ab9139a-c748-435d-9625-f2f2192694c3] Running
	I0929 10:21:51.110483    4864 system_pods.go:89] "csi-hostpath-attacher-0" [958beda8-5e13-4363-a1d6-b5cd482a5cbe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:51.110526    4864 system_pods.go:89] "csi-hostpath-resizer-0" [ea0f9b02-073e-4d8d-99ba-cc8335a30d57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:51.110555    4864 system_pods.go:89] "csi-hostpathplugin-brrn2" [71f899e6-1ac1-49a8-83d8-af6a5318a1a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:51.110582    4864 system_pods.go:89] "etcd-addons-718460" [6d955d0a-3cee-4af8-bb88-e0c4bd1da370] Running
	I0929 10:21:51.110628    4864 system_pods.go:89] "kindnet-9x6tm" [17c04c98-7bd7-41ee-afc0-6c790305ce3c] Running
	I0929 10:21:51.110655    4864 system_pods.go:89] "kube-apiserver-addons-718460" [93b6cef5-bdb2-4b18-8e1b-1ac07be12138] Running
	I0929 10:21:51.110672    4864 system_pods.go:89] "kube-controller-manager-addons-718460" [531ff0d3-8d38-4fef-bfa4-5c395218c74f] Running
	I0929 10:21:51.110704    4864 system_pods.go:89] "kube-ingress-dns-minikube" [582a3d23-f7fc-41d8-8dc7-0cbb957e8900] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:51.110726    4864 system_pods.go:89] "kube-proxy-6ln4j" [304d67a8-0fbf-46c7-9a5b-36da788ef137] Running
	I0929 10:21:51.110745    4864 system_pods.go:89] "kube-scheduler-addons-718460" [2fda38af-2d54-4daf-a2b5-35cdf3fb44b6] Running
	I0929 10:21:51.110765    4864 system_pods.go:89] "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:51.110796    4864 system_pods.go:89] "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:51.110822    4864 system_pods.go:89] "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:51.110843    4864 system_pods.go:89] "registry-creds-764b6fb674-4fzwl" [0c56d9a6-d1d3-47d5-818d-9c40431e5cf6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:51.110879    4864 system_pods.go:89] "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:51.110903    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cnvrp" [30f17c07-6399-415b-8d2f-067c52f3719b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:51.110924    4864 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r4bnf" [0703cc38-418f-428c-9231-050f4e56adc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:51.110957    4864 system_pods.go:89] "storage-provisioner" [6b0e933e-79ef-4387-9d57-244f74346b34] Running
	I0929 10:21:51.110983    4864 system_pods.go:126] duration metric: took 1.618939918s to wait for k8s-apps to be running ...
	I0929 10:21:51.111004    4864 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:21:51.111094    4864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:21:51.133499    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.139593    4864 system_svc.go:56] duration metric: took 28.579688ms WaitForService to wait for kubelet
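The kubelet check above is just an exit-code test on systemctl. A minimal local sketch; in minikube the same command runs inside the node through an SSH runner, and the exact invocation in the log differs slightly:

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active.
// --quiet suppresses output; the exit code alone carries the answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet running:", kubeletActive())
}
```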
	I0929 10:21:51.139677    4864 kubeadm.go:578] duration metric: took 46.885273896s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:21:51.139713    4864 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:21:51.145238    4864 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 10:21:51.145387    4864 node_conditions.go:123] node cpu capacity is 2
	I0929 10:21:51.145417    4864 node_conditions.go:105] duration metric: took 5.671582ms to run NodePressure ...
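The NodePressure step reads the node's reported capacity (the ephemeral-storage and cpu values printed just above) and verifies that no resource-pressure condition is True. A minimal sketch over a corev1.Node; the sample values mirror the log, and client construction is elided:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// verifyNodePressure prints the node's capacity and returns an error if any
// memory, disk, or PID pressure condition is currently True.
func verifyNodePressure(node corev1.Node) error {
	fmt.Printf("node storage ephemeral capacity is %s\n", node.Status.Capacity.StorageEphemeral().String())
	fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu().String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", node.Name, c.Type)
			}
		}
	}
	return nil
}

func main() {
	node := corev1.Node{
		Status: corev1.NodeStatus{
			Capacity: corev1.ResourceList{
				corev1.ResourceCPU:              resource.MustParse("2"),
				corev1.ResourceEphemeralStorage: resource.MustParse("203034800Ki"),
			},
			Conditions: []corev1.NodeCondition{
				{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
			},
		},
	}
	if err := verifyNodePressure(node); err != nil {
		panic(err)
	}
}
```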
	I0929 10:21:51.145459    4864 start.go:241] waiting for startup goroutines ...
	I0929 10:21:51.359960    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.360056    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:51.428909    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.634021    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.853862    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:51.854233    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.929726    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.132014    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.351102    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.351618    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:52.428546    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.633743    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.853802    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:52.854485    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.928587    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.132228    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.352886    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:53.354262    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.428300    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.632506    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.853128    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.853470    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:53.928644    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.132153    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.353229    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:54.353722    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.428768    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.632947    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.850222    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:54.850987    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.929202    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.131688    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.351051    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.352917    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:55.436880    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.637027    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.850942    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.851409    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:55.951233    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.132431    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.350793    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:56.351541    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.431190    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.633818    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.829276    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:56.853241    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:56.853800    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.933699    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:57.131749    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.351884    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:57.352235    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:57.433419    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:57.634647    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.874741    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:57.874827    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:57.944088    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.082897    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.253579238s)
	W0929 10:21:58.082936    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:58.082958    4864 retry.go:31] will retry after 28.629892009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:58.136181    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.363740    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:58.364241    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:58.436737    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.632273    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.851518    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:58.852004    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:58.952480    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.131735    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.350234    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:59.350316    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:59.428288    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.632181    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.851409    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:59.851618    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:59.928605    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.137545    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.352307    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:00.352886    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:00.429170    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.632935    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.850887    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:00.851079    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:00.927944    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.137487    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.351931    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:01.352347    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:01.428785    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.632788    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.852208    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:01.852597    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:01.928733    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.132836    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.352322    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:02.352420    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:02.430347    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.633511    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.853243    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:02.855768    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:02.929797    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.132891    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.354095    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:03.354490    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:03.428672    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.633304    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.851250    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:03.851408    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:03.951997    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.135119    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.351377    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:04.351488    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:04.428354    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.631908    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.850640    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:04.850731    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:04.929062    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.140577    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.351720    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:05.352211    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:05.428489    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.631811    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.852376    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:05.852950    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:05.929335    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.133496    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.352403    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:06.352688    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:06.428759    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.631750    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.853209    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:06.853247    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:06.928061    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.131638    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.350576    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:22:07.351230    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:07.428103    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.634167    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 poll lines elided: the csi-hostpath-driver, ingress-nginx, registry, and gcp-auth selectors are re-checked every few hundred milliseconds through 10:22:26, all still Pending ...]
	I0929 10:22:26.632327    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
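	The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending. The same selectors can be watched by hand with kubectl; the command below is an illustrative sketch using this run's context name, not something minikube itself executes:

		# watch one of the label selectors minikube is polling (illustrative sketch)
		kubectl --context addons-718460 get pods --all-namespaces \
			-l app.kubernetes.io/name=ingress-nginx --watch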
	I0929 10:22:26.713613    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:22:26.860509    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... further kapi.go:96 polls elided while the inspektor-gadget apply runs; all four selectors remain Pending through 10:22:27.9 ...]
	I0929 10:22:27.933245    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:28.014193    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.30054244s)
	W0929 10:22:28.014238    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:22:28.014257    4864 retry.go:31] will retry after 21.249329091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
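	retry.go schedules another attempt after a delay (about 21.2s here). Re-running the same apply by hand would look roughly like the loop below; the paths and flags are copied from the log, but the loop itself is only a sketch of the idea, since minikube handles the backoff internally:

		# crude manual re-run of the failing apply (sketch; futile here, since the
		# validation error is deterministic until ig-crd.yaml itself is fixed)
		until minikube -p addons-718460 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
			/var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
			-f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
			sleep 21
		done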
	I0929 10:22:28.132155    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 poll lines elided: csi-hostpath-driver, registry, ingress-nginx, and gcp-auth re-checked every few hundred milliseconds through 10:22:39.8, all still Pending ...]
	I0929 10:22:39.852858    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:22:39.853252    4864 kapi.go:107] duration metric: took 1m29.506535059s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:22:39.928290    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 poll lines elided: with registry done, the remaining gcp-auth, csi-hostpath-driver, and ingress-nginx selectors are re-checked through 10:22:49.1, all still Pending ...]
	I0929 10:22:49.140458    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:49.263774    4864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:22:49.350754    4864 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... further kapi.go:96 polls elided while the inspektor-gadget apply is retried; ingress-nginx, gcp-auth, and csi-hostpath-driver remain Pending through 10:22:50.9 ...]
	I0929 10:22:50.936140    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:50.968605    4864 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.704750681s)
	W0929 10:22:50.968685    4864 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:22:50.968812    4864 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
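	The stderr line pins the failure down: the first YAML document in ig-crd.yaml is missing the apiVersion and kind fields that every Kubernetes manifest must carry, so client-side validation rejects the apply even though the other gadget resources go through unchanged. One plausible way to confirm from outside the node (a diagnostic sketch; only the file path comes from the log):

		# peek at the head of the manifest the apply keeps rejecting
		minikube -p addons-718460 ssh -- sudo head -n 6 /etc/kubernetes/addons/ig-crd.yaml
		# judging by the file name this should be a CRD, so a valid document would
		# open with something like:
		#   apiVersion: apiextensions.k8s.io/v1
		#   kind: CustomResourceDefinition

	The --validate=false escape hatch named in the error would let the apply proceed, but it only skips the check on the malformed document rather than installing anything useful from it.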
	I0929 10:22:51.132727    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 poll lines elided: csi-hostpath-driver, ingress-nginx, and gcp-auth re-checked through 10:22:56.1, all still Pending ...]
	I0929 10:22:56.132131    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:56.350952    4864 kapi.go:107] duration metric: took 1m46.004256153s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:22:56.429364    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 poll lines elided: with ingress-nginx done, gcp-auth and csi-hostpath-driver are re-checked through 10:23:01.1, both still Pending ...]
	I0929 10:23:01.133301    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:23:01.428989    4864 kapi.go:107] duration metric: took 1m44.503835138s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:23:01.432059    4864 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-718460 cluster.
	I0929 10:23:01.435208    4864 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:23:01.438168    4864 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
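	The gcp-auth notes above describe an opt-out label; a minimal sketch of a pod that uses it follows. The pod name and image are placeholders, and the "true" value is assumed by convention since the log only names the label key:

		# pod that opts out of the gcp-auth credential mount (sketch)
		kubectl --context addons-718460 apply -f - <<'EOF'
		apiVersion: v1
		kind: Pod
		metadata:
		  name: no-gcp-creds              # placeholder name
		  labels:
		    gcp-auth-skip-secret: "true"  # value assumed; the log specifies only the key
		spec:
		  containers:
		  - name: app
		    image: nginx                  # placeholder image
		EOF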
	I0929 10:23:01.631533    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 poll lines elided: csi-hostpath-driver, the last selector outstanding, is re-checked twice per second through 10:23:07.1, still Pending ...]
	I0929 10:23:07.132670    4864 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:23:07.631821    4864 kapi.go:107] duration metric: took 1m56.503675252s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:23:07.635178    4864 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, registry-creds, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 10:23:07.638034    4864 addons.go:514] duration metric: took 2m3.383112545s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner registry-creds cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 10:23:07.638118    4864 start.go:246] waiting for cluster config update ...
	I0929 10:23:07.638164    4864 start.go:255] writing updated cluster config ...
	I0929 10:23:07.639165    4864 ssh_runner.go:195] Run: rm -f paused
	I0929 10:23:07.642480    4864 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:23:07.645911    4864 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bbxfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:07.650467    4864 pod_ready.go:94] pod "coredns-66bc5c9577-bbxfm" is "Ready"
	I0929 10:23:07.650500    4864 pod_ready.go:86] duration metric: took 4.561221ms for pod "coredns-66bc5c9577-bbxfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:07.653028    4864 pod_ready.go:83] waiting for pod "etcd-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:07.657536    4864 pod_ready.go:94] pod "etcd-addons-718460" is "Ready"
	I0929 10:23:07.657562    4864 pod_ready.go:86] duration metric: took 4.509734ms for pod "etcd-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:07.659745    4864 pod_ready.go:83] waiting for pod "kube-apiserver-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:07.664673    4864 pod_ready.go:94] pod "kube-apiserver-addons-718460" is "Ready"
	I0929 10:23:07.664699    4864 pod_ready.go:86] duration metric: took 4.926837ms for pod "kube-apiserver-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:07.667257    4864 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:08.046673    4864 pod_ready.go:94] pod "kube-controller-manager-addons-718460" is "Ready"
	I0929 10:23:08.046704    4864 pod_ready.go:86] duration metric: took 379.420763ms for pod "kube-controller-manager-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:08.252679    4864 pod_ready.go:83] waiting for pod "kube-proxy-6ln4j" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:08.646616    4864 pod_ready.go:94] pod "kube-proxy-6ln4j" is "Ready"
	I0929 10:23:08.646687    4864 pod_ready.go:86] duration metric: took 393.937368ms for pod "kube-proxy-6ln4j" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:08.847487    4864 pod_ready.go:83] waiting for pod "kube-scheduler-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:09.246727    4864 pod_ready.go:94] pod "kube-scheduler-addons-718460" is "Ready"
	I0929 10:23:09.246758    4864 pod_ready.go:86] duration metric: took 399.205707ms for pod "kube-scheduler-addons-718460" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:23:09.246772    4864 pod_ready.go:40] duration metric: took 1.604260528s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:23:09.642681    4864 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 10:23:09.646129    4864 out.go:179] * Done! kubectl is now configured to use "addons-718460" cluster and "default" namespace by default
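
The pod_ready loop above polls each control-plane pod by label until it reports Ready or disappears. A rough command-line equivalent for one of those selectors (a sketch; kubectl wait has no built-in "or be gone" branch, the context name and 4m timeout are taken from the log lines above):

  kubectl --context addons-718460 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m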
	
	
	==> CRI-O <==
	Sep 29 10:25:01 addons-718460 crio[985]: time="2025-09-29 10:25:01.234296128Z" level=info msg="Removed pod sandbox: b4b0c3c9bb970cf4915117d6585ec6dddd35955b55b3286f1619e1c1b82c37cc" id=30cf5c34-8658-41d0-8062-462762d7dd57 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.647870460Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-ffqdd/POD" id=09d37990-5c20-4810-896a-0d86074bad68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.647934802Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.678789721Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ffqdd Namespace:default ID:60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929 UID:2e0eba06-75fd-4d42-bce2-63bb642a8097 NetNS:/var/run/netns/0dbf3f54-5dee-4709-bd5b-20c9790a0865 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.678998533Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-ffqdd to CNI network \"kindnet\" (type=ptp)"
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.692925760Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ffqdd Namespace:default ID:60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929 UID:2e0eba06-75fd-4d42-bce2-63bb642a8097 NetNS:/var/run/netns/0dbf3f54-5dee-4709-bd5b-20c9790a0865 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.693081931Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-ffqdd for CNI network kindnet (type=ptp)"
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.698567779Z" level=info msg="Ran pod sandbox 60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929 with infra container: default/hello-world-app-5d498dc89-ffqdd/POD" id=09d37990-5c20-4810-896a-0d86074bad68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.700491905Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=94663d5a-c0b1-48c6-9f51-4e643544618c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.700718925Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=94663d5a-c0b1-48c6-9f51-4e643544618c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.701509607Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1449b362-4366-4421-a922-e97eba610c13 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.706158460Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 10:26:49 addons-718460 crio[985]: time="2025-09-29 10:26:49.949971246Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.623537482Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=1449b362-4366-4421-a922-e97eba610c13 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.624267514Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bbccef56-088c-4031-bee9-13228b9a2d15 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.624963386Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bbccef56-088c-4031-bee9-13228b9a2d15 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.628370338Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3d46e94b-ffbe-4ad8-b21a-62e93d3ed79d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.629108281Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3d46e94b-ffbe-4ad8-b21a-62e93d3ed79d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.635686376Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-ffqdd/hello-world-app" id=bc2e773f-b040-47c1-85e3-e9718f708f11 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.635811472Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.658845789Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/84d394aa81c1f372cbec58e6c2db90dbbda5b411d600e286b6aa9e8b6431cbf2/merged/etc/passwd: no such file or directory"
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.659283902Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/84d394aa81c1f372cbec58e6c2db90dbbda5b411d600e286b6aa9e8b6431cbf2/merged/etc/group: no such file or directory"
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.775259581Z" level=info msg="Created container 08a401458df835cc542c41fb61c0726c82d37e8aced82b945cbacab726022e33: default/hello-world-app-5d498dc89-ffqdd/hello-world-app" id=bc2e773f-b040-47c1-85e3-e9718f708f11 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.776059781Z" level=info msg="Starting container: 08a401458df835cc542c41fb61c0726c82d37e8aced82b945cbacab726022e33" id=68e51239-c64a-4f80-94a3-7bb52ca1f95b name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 10:26:50 addons-718460 crio[985]: time="2025-09-29 10:26:50.792935627Z" level=info msg="Started container" PID=9666 containerID=08a401458df835cc542c41fb61c0726c82d37e8aced82b945cbacab726022e33 description=default/hello-world-app-5d498dc89-ffqdd/hello-world-app id=68e51239-c64a-4f80-94a3-7bb52ca1f95b name=/runtime.v1.RuntimeService/StartContainer sandboxID=60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929
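
The lifecycle above runs RunPodSandbox, PullImage, CreateContainer, then StartContainer for hello-world-app. The same container can be inspected from inside the node with crictl (a sketch; the need for sudo and crictl's acceptance of the truncated container ID from the status table below are assumptions about the node image):

  minikube -p addons-718460 ssh -- sudo crictl ps --name hello-world-app
  minikube -p addons-718460 ssh -- sudo crictl logs 08a401458df83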
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	08a401458df83       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   60a3fcdd57f3a       hello-world-app-5d498dc89-ffqdd
	063ec21eafbda       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   fc082dd4819a8       nginx
	069523e9738c5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   979e8715f19ff       busybox
	894c2237526b7       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   e65117b027076       ingress-nginx-controller-9cc49f96f-shrv5
	738d0d7ceecb6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago            Running             gadget                    0                   a0ce9a85e3b1c       gadget-c9wmw
	2f0f2fb8e8bdb       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago            Running             local-path-provisioner    0                   2a8d18c1b8248       local-path-provisioner-648f6765c9-q2fq2
	7de921e03478e       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             4 minutes ago            Exited              patch                     2                   b22672108c7bb       ingress-nginx-admission-patch-k2kng
	7f21d868140c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   58411e1da14ba       ingress-nginx-admission-create-bkcrr
	71c04e85d5c4a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               4 minutes ago            Running             minikube-ingress-dns      0                   6d8908ea20b77       kube-ingress-dns-minikube
	271807be28b6f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   eaf5978fae401       storage-provisioner
	2620f479e5f65       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   156f9a74ccbb8       coredns-66bc5c9577-bbxfm
	c521e86b4c7b3       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             5 minutes ago            Running             kube-proxy                0                   e44bc45e11187       kube-proxy-6ln4j
	2dfa05bb4508c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             5 minutes ago            Running             kindnet-cni               0                   94a1bafa97019       kindnet-9x6tm
	aca5392638764       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             5 minutes ago            Running             kube-controller-manager   0                   cc63ce8df199a       kube-controller-manager-addons-718460
	b3bf482961af3       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             5 minutes ago            Running             kube-apiserver            0                   f1b20fb4b7ed8       kube-apiserver-addons-718460
	e27e6f1ac2e7e       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             5 minutes ago            Running             kube-scheduler            0                   9c80341fb8859       kube-scheduler-addons-718460
	b9a8339207fe3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             5 minutes ago            Running             etcd                      0                   629e4b080a799       etcd-addons-718460
	
	
	==> coredns [2620f479e5f655bdcd77a82cb5d1223a9fa7b62a088e514733ac8e18210e666c] <==
	[INFO] 10.244.0.14:47416 - 13038 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002801719s
	[INFO] 10.244.0.14:47416 - 63930 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000139395s
	[INFO] 10.244.0.14:47416 - 39490 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000108602s
	[INFO] 10.244.0.14:43313 - 45551 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242163s
	[INFO] 10.244.0.14:43313 - 45756 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101824s
	[INFO] 10.244.0.14:43280 - 46437 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010189s
	[INFO] 10.244.0.14:43280 - 46617 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100676s
	[INFO] 10.244.0.14:48132 - 42315 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092094s
	[INFO] 10.244.0.14:48132 - 42135 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081s
	[INFO] 10.244.0.14:35763 - 24600 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001286654s
	[INFO] 10.244.0.14:35763 - 25028 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001369672s
	[INFO] 10.244.0.14:50076 - 18305 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093693s
	[INFO] 10.244.0.14:50076 - 18149 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175555s
	[INFO] 10.244.0.21:57023 - 20947 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164272s
	[INFO] 10.244.0.21:54815 - 40997 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000365082s
	[INFO] 10.244.0.21:39401 - 2554 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179263s
	[INFO] 10.244.0.21:35114 - 29082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108955s
	[INFO] 10.244.0.21:34792 - 62121 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.003259682s
	[INFO] 10.244.0.21:43827 - 31622 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000176277s
	[INFO] 10.244.0.21:54033 - 26922 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001849929s
	[INFO] 10.244.0.21:48786 - 51414 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003549886s
	[INFO] 10.244.0.21:48789 - 3771 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000965042s
	[INFO] 10.244.0.21:46949 - 25439 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004777513s
	[INFO] 10.244.0.23:41528 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262692s
	[INFO] 10.244.0.23:55614 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138902s
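
The NXDOMAIN bursts above are ordinary resolv.conf search-path expansion (the Kubernetes default of ndots:5): each in-cluster name is tried with every search suffix, including the host's us-east-2.compute.internal domain, before the fully qualified form returns NOERROR. One such query can be replayed from a throwaway pod (a sketch, reusing the busybox image already present in this cluster):

  kubectl --context addons-718460 run dns-test --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local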
	
	
	==> describe nodes <==
	Name:               addons-718460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-718460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=addons-718460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_20_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718460
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:20:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718460
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:26:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:24:34 +0000   Mon, 29 Sep 2025 10:20:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:24:34 +0000   Mon, 29 Sep 2025 10:20:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:24:34 +0000   Mon, 29 Sep 2025 10:20:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:24:34 +0000   Mon, 29 Sep 2025 10:21:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-718460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 464a65cdb91e45b58dbc386f56799557
	  System UUID:                62ee95fc-cfb3-4dd6-bcc3-a94ee3a6d548
	  Boot ID:                    94bae1c7-2aab-4023-97c8-d86f41852a19
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     hello-world-app-5d498dc89-ffqdd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-c9wmw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-shrv5    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m41s
	  kube-system                 coredns-66bc5c9577-bbxfm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m47s
	  kube-system                 etcd-addons-718460                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m52s
	  kube-system                 kindnet-9x6tm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m47s
	  kube-system                 kube-apiserver-addons-718460                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-controller-manager-addons-718460       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-6ln4j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-addons-718460                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  local-path-storage          local-path-provisioner-648f6765c9-q2fq2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 5m41s            kube-proxy       
	  Normal   NodeHasSufficientMemory  6m (x8 over 6m)  kubelet          Node addons-718460 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m (x8 over 6m)  kubelet          Node addons-718460 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m (x8 over 6m)  kubelet          Node addons-718460 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m53s            kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m53s            kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m52s            kubelet          Node addons-718460 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m52s            kubelet          Node addons-718460 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m52s            kubelet          Node addons-718460 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m48s            node-controller  Node addons-718460 event: Registered Node addons-718460 in Controller
	  Normal   NodeReady                5m4s             kubelet          Node addons-718460 status is now: NodeReady
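
This dump comes from the node object itself and can be regenerated at any time (a sketch):

  kubectl --context addons-718460 describe node addons-718460

The percentages in Allocated resources are relative to the Allocatable block: 950m of CPU requests against 2 full CPUs is 950/2000, i.e. the 47% shown (the fraction is truncated, not rounded).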
	
	
	==> dmesg <==
	[Sep29 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015081] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507046] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032504] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738127] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.628888] kauditd_printk_skb: 36 callbacks suppressed
	[Sep29 10:24] hrtimer: interrupt took 16266417 ns
	
	
	==> etcd [b9a8339207fe3e1405c0f692346e4aaba316070c50fe37e05c62a78c1135ce44] <==
	{"level":"info","ts":"2025-09-29T10:21:08.218780Z","caller":"traceutil/trace.go:172","msg":"trace[1535872472] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:397; }","duration":"242.858552ms","start":"2025-09-29T10:21:07.975915Z","end":"2025-09-29T10:21:08.218773Z","steps":["trace[1535872472] 'agreement among raft nodes before linearized reading'  (duration: 132.417725ms)","trace[1535872472] 'range keys from in-memory index tree'  (duration: 110.403782ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:08.218897Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"247.37912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:08.218918Z","caller":"traceutil/trace.go:172","msg":"trace[2005936873] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:397; }","duration":"247.407583ms","start":"2025-09-29T10:21:07.971503Z","end":"2025-09-29T10:21:08.218911Z","steps":["trace[2005936873] 'agreement among raft nodes before linearized reading'  (duration: 136.838275ms)","trace[2005936873] 'range keys from in-memory index tree'  (duration: 110.530433ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:08.218998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"344.716738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:08.219026Z","caller":"traceutil/trace.go:172","msg":"trace[175174962] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:397; }","duration":"344.744733ms","start":"2025-09-29T10:21:07.874272Z","end":"2025-09-29T10:21:08.219017Z","steps":["trace[175174962] 'agreement among raft nodes before linearized reading'  (duration: 234.073929ms)","trace[175174962] 'range keys from in-memory index tree'  (duration: 110.63672ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:08.219054Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T10:21:07.874264Z","time spent":"344.775436ms","remote":"127.0.0.1:35648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 "}
	{"level":"warn","ts":"2025-09-29T10:21:08.235826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"361.876595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:08.235897Z","caller":"traceutil/trace.go:172","msg":"trace[1380959463] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:397; }","duration":"361.959112ms","start":"2025-09-29T10:21:07.873919Z","end":"2025-09-29T10:21:08.235878Z","steps":["trace[1380959463] 'agreement among raft nodes before linearized reading'  (duration: 234.433381ms)","trace[1380959463] 'range keys from in-memory index tree'  (duration: 127.390078ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:08.235940Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T10:21:07.873914Z","time spent":"362.014832ms","remote":"127.0.0.1:35500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces/gadget\" limit:1 "}
	{"level":"warn","ts":"2025-09-29T10:21:08.236066Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.943891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4338"}
	{"level":"info","ts":"2025-09-29T10:21:08.236095Z","caller":"traceutil/trace.go:172","msg":"trace[642673075] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:397; }","duration":"259.967087ms","start":"2025-09-29T10:21:07.976117Z","end":"2025-09-29T10:21:08.236084Z","steps":["trace[642673075] 'agreement among raft nodes before linearized reading'  (duration: 150.029906ms)","trace[642673075] 'range keys from in-memory index tree'  (duration: 109.56895ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:21:08.236472Z","caller":"traceutil/trace.go:172","msg":"trace[2117685371] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"110.09854ms","start":"2025-09-29T10:21:08.126354Z","end":"2025-09-29T10:21:08.236452Z","steps":["trace[2117685371] 'process raft request'  (duration: 46.857982ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:08.287617Z","caller":"traceutil/trace.go:172","msg":"trace[1010258089] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:406; }","duration":"114.371812ms","start":"2025-09-29T10:21:08.173228Z","end":"2025-09-29T10:21:08.287600Z","steps":["trace[1010258089] 'read index received'  (duration: 114.367422ms)","trace[1010258089] 'applied index is now lower than readState.Index'  (duration: 3.75µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:08.321811Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.627192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-09-29T10:21:08.323115Z","caller":"traceutil/trace.go:172","msg":"trace[835233654] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"196.949243ms","start":"2025-09-29T10:21:08.126144Z","end":"2025-09-29T10:21:08.323093Z","steps":["trace[835233654] 'agreement among raft nodes before linearized reading'  (duration: 169.123803ms)","trace[835233654] 'range keys from in-memory index tree'  (duration: 26.400622ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:08.323362Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.325063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:08.340076Z","caller":"traceutil/trace.go:172","msg":"trace[659637779] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:398; }","duration":"214.029117ms","start":"2025-09-29T10:21:08.126020Z","end":"2025-09-29T10:21:08.340049Z","steps":["trace[659637779] 'agreement among raft nodes before linearized reading'  (duration: 169.264387ms)","trace[659637779] 'range keys from in-memory index tree'  (duration: 28.04914ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:21:08.340786Z","caller":"traceutil/trace.go:172","msg":"trace[1708601259] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"203.465579ms","start":"2025-09-29T10:21:08.137309Z","end":"2025-09-29T10:21:08.340775Z","steps":["trace[1708601259] 'process raft request'  (duration: 158.001695ms)","trace[1708601259] 'compare'  (duration: 45.125142ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:21:08.341054Z","caller":"traceutil/trace.go:172","msg":"trace[335698099] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"203.304877ms","start":"2025-09-29T10:21:08.137729Z","end":"2025-09-29T10:21:08.341034Z","steps":["trace[335698099] 'process raft request'  (duration: 202.825732ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:21:11.056114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:21:11.142689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:21:33.185914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:21:33.194574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:21:33.216147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:21:33.230678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50850","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:26:51 up 9 min,  0 users,  load average: 0.84, 1.26, 0.73
	Linux addons-718460 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2dfa05bb4508c7cc9e363c167ce0bf06808be483ffecbabf11516034d96d4411] <==
	I0929 10:24:47.548499       1 main.go:301] handling current node
	I0929 10:24:57.548451       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:57.548486       1 main.go:301] handling current node
	I0929 10:25:07.548403       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:25:07.548501       1 main.go:301] handling current node
	I0929 10:25:17.548517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:25:17.548552       1 main.go:301] handling current node
	I0929 10:25:27.548585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:25:27.548616       1 main.go:301] handling current node
	I0929 10:25:37.548444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:25:37.548528       1 main.go:301] handling current node
	I0929 10:25:47.548474       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:25:47.548536       1 main.go:301] handling current node
	I0929 10:25:57.548314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:25:57.548346       1 main.go:301] handling current node
	I0929 10:26:07.547756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:26:07.547868       1 main.go:301] handling current node
	I0929 10:26:17.548458       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:26:17.548492       1 main.go:301] handling current node
	I0929 10:26:27.547900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:26:27.547938       1 main.go:301] handling current node
	I0929 10:26:37.548431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:26:37.548464       1 main.go:301] handling current node
	I0929 10:26:47.547985       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:26:47.548020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b3bf482961af38d8e5c5101db9a35a4455b50d0f9835de28d3472bfe3326f818] <==
	I0929 10:23:19.885186       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 10:23:20.751785       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49312: use of closed network connection
	E0929 10:23:21.019849       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49340: use of closed network connection
	I0929 10:23:54.837379       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.188.19"}
	I0929 10:24:13.251738       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 10:24:22.356843       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:24:23.385478       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:24:26.355641       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:24:26.822877       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.244.245"}
	I0929 10:24:27.458900       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:24:27.459099       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:24:27.488341       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:24:27.489041       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:24:27.526276       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:24:27.526321       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0929 10:24:27.558672       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	I0929 10:24:27.562262       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:24:27.562373       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:24:28.217932       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 10:24:28.526964       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 10:24:28.563009       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0929 10:24:28.662432       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0929 10:25:30.007736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:25:33.914235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:49.530353       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.95.227"}
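
The "allocated clusterIPs" entries logged here correspond to Service creations; the 10.104.95.227 address assigned at 10:26:49, for example, should be visible on the Service object (a sketch):

  kubectl --context addons-718460 get svc hello-world-app -o wide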
	
	
	==> kube-controller-manager [aca5392638764348c0b3b2edb2f901c4a55428163f6adecf59fd8137956c4392] <==
	E0929 10:24:38.297883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:43.687380       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:43.688570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:45.832019       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:45.833247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:48.441944       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:48.443004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:59.268597       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:59.271313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:25:07.515430       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:25:07.516495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:25:10.309746       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:25:10.310892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:25:34.468700       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:25:34.469810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:25:35.406038       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:25:35.407383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:25:44.231032       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:25:44.232175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:26:24.904271       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:26:24.905405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:26:30.152982       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:26:30.154180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:26:35.378139       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:26:35.379293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [c521e86b4c7b31710fb35a80a30e58c95fb23ffbb6c868a725e0614da8645aa7] <==
	I0929 10:21:09.464434       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:21:09.860153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:21:09.960977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:21:09.961024       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:21:09.961106       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:21:10.093477       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:21:10.093618       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:21:10.098888       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:21:10.108340       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:21:10.108431       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:21:10.112456       1 config.go:200] "Starting service config controller"
	I0929 10:21:10.112486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:21:10.158300       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:21:10.160482       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:21:10.160604       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:21:10.160635       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:21:10.161341       1 config.go:309] "Starting node config controller"
	I0929 10:21:10.161416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:21:10.161448       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:21:10.212709       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:21:10.261798       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:21:10.277027       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
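
The startup warning above suggests --nodeport-addresses primary. In a kubeadm-managed cluster like this one, that setting lives in the nodePortAddresses field of the KubeProxyConfiguration stored in the kube-proxy ConfigMap (a sketch for viewing it; whether a manual edit there survives a minikube restart is not verified here):

  kubectl --context addons-718460 -n kube-system get configmap kube-proxy -o yaml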
	
	
	==> kube-scheduler [e27e6f1ac2e7e07418a443b984aeb2271a2f304f3a38049c114014ffac8d279f] <==
	E0929 10:20:56.256170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:56.256237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:56.256299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:56.256344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:56.256403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:56.256470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:56.256528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:56.256920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:20:56.257105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:20:56.257164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:20:56.257306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:20:56.257363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:20:56.257418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:20:56.257450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:20:56.257764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:56.263494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 10:20:57.146060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:57.187448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:20:57.320654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:57.338297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:57.353852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 10:20:57.371882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:57.385369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:57.429286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I0929 10:21:00.350407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.098112    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0d7979f0400eb8d91207a24ec5ebd1001c1104c0b2fab0b07ac8b0d1a4f1eca1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0d7979f0400eb8d91207a24ec5ebd1001c1104c0b2fab0b07ac8b0d1a4f1eca1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.098162    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f265559e003257ebd2bdc8296918ef0d3509e11caf2298e891c6834e3777a575/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f265559e003257ebd2bdc8296918ef0d3509e11caf2298e891c6834e3777a575/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.100385    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0efea5d4dc12d8e49cd3d6eb5c2304ce468239319eee158bcff8e76d9785592c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0efea5d4dc12d8e49cd3d6eb5c2304ce468239319eee158bcff8e76d9785592c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.136155    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e8c7537fd075e4246f3234a77e63f1f159e3b199e31a4be761a3ba45b5c4a38a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e8c7537fd075e4246f3234a77e63f1f159e3b199e31a4be761a3ba45b5c4a38a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.139836    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5ee349354454007cae19c426c4a670b433a53820e7dfbf8244765a5c4a1087d2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5ee349354454007cae19c426c4a670b433a53820e7dfbf8244765a5c4a1087d2/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.140045    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0efea5d4dc12d8e49cd3d6eb5c2304ce468239319eee158bcff8e76d9785592c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0efea5d4dc12d8e49cd3d6eb5c2304ce468239319eee158bcff8e76d9785592c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.173952    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/812ed06ebc994d11a0093c0933ad4a872ca83836e9571845a24257fb28712082/diff" to get inode usage: stat /var/lib/containers/storage/overlay/812ed06ebc994d11a0093c0933ad4a872ca83836e9571845a24257fb28712082/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.190521    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/812ed06ebc994d11a0093c0933ad4a872ca83836e9571845a24257fb28712082/diff" to get inode usage: stat /var/lib/containers/storage/overlay/812ed06ebc994d11a0093c0933ad4a872ca83836e9571845a24257fb28712082/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.207998    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2c1ae1357818011a7a37d92a3e1206631f3aeeb6b8efeb482fa61f26ec78b536/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2c1ae1357818011a7a37d92a3e1206631f3aeeb6b8efeb482fa61f26ec78b536/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.287290    1521 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141559286907929 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:25:59 addons-718460 kubelet[1521]: E0929 10:25:59.287328    1521 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141559286907929 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:09 addons-718460 kubelet[1521]: E0929 10:26:09.289741    1521 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141569289377944 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:09 addons-718460 kubelet[1521]: E0929 10:26:09.289785    1521 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141569289377944 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:19 addons-718460 kubelet[1521]: E0929 10:26:19.293037    1521 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141579292436122 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:19 addons-718460 kubelet[1521]: E0929 10:26:19.293080    1521 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141579292436122 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:29 addons-718460 kubelet[1521]: E0929 10:26:29.296086    1521 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141589295812054 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:29 addons-718460 kubelet[1521]: E0929 10:26:29.296125    1521 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141589295812054 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:39 addons-718460 kubelet[1521]: E0929 10:26:39.298465    1521 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141599298191181 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:39 addons-718460 kubelet[1521]: E0929 10:26:39.298503    1521 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141599298191181 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:47 addons-718460 kubelet[1521]: E0929 10:26:47.313568    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/41e848a72243382fcd80e07e48edfee06b0ef37c99ce0baaca1d1f965a593993/diff" to get inode usage: stat /var/lib/containers/storage/overlay/41e848a72243382fcd80e07e48edfee06b0ef37c99ce0baaca1d1f965a593993/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:26:49 addons-718460 kubelet[1521]: E0929 10:26:49.180765    1521 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5203eebabb3a6ac618649bbfbe00b84e67aa9401db028e4de9b8b08f511351e0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5203eebabb3a6ac618649bbfbe00b84e67aa9401db028e4de9b8b08f511351e0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 10:26:49 addons-718460 kubelet[1521]: E0929 10:26:49.304953    1521 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141609304666175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:49 addons-718460 kubelet[1521]: E0929 10:26:49.304985    1521 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141609304666175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 10:26:49 addons-718460 kubelet[1521]: I0929 10:26:49.481736    1521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhsv2\" (UniqueName: \"kubernetes.io/projected/2e0eba06-75fd-4d42-bce2-63bb642a8097-kube-api-access-zhsv2\") pod \"hello-world-app-5d498dc89-ffqdd\" (UID: \"2e0eba06-75fd-4d42-bce2-63bb642a8097\") " pod="default/hello-world-app-5d498dc89-ffqdd"
	Sep 29 10:26:49 addons-718460 kubelet[1521]: W0929 10:26:49.697281    1521 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2613950f87c2f553171eb6717dd64c983e396d0735f837d87083c6d52ff3b084/crio-60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929 WatchSource:0}: Error finding container 60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929: Status 404 returned error can't find the container with id 60a3fcdd57f3ad7f50d9ffe30a8b2eedd2c3ba8ff2f3ad05391857e8db83c929
	
	
	==> storage-provisioner [271807be28b6f740c9f0d69f75de4dac8c139cf52cee5dd5bf0290ac0ebf4091] <==
	W0929 10:26:25.897461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:27.901058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:27.907189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:29.910156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:29.914553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:31.918444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:31.922525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:33.925735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:33.932306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:35.934915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:35.939441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:37.942648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:37.947042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:39.949982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:39.954602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:41.957478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:41.963683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:43.967003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:43.972511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:45.976256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:45.983030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:47.985907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:47.990639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:50.013693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:26:50.024983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
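Three noisy log families in the dump above are diagnosable from the dump itself. The kube-scheduler "Failed to watch ... is forbidden" errors look like the usual bootstrap race: the scheduler's informers start before its RBAC grants exist, and the block ends with "Caches are synced" at 10:21:00 once permissions catch up. The kubelet fsHandler and eviction-manager errors repeat on a fixed 10-second cadence; the stat failures point at overlay diff directories that no longer exist on disk, and the "missing image stats" message appears to be a stats-reporting mismatch between the kubelet and CRI-O rather than an actual eviction. The storage-provisioner warnings are client-go deprecation notices for its use of v1 Endpoints, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch for poking at both, assuming the profile is still running (plain minikube/crictl/kubectl usage, not taken from this run):

	# Show the image filesystem CRI-O reports to the kubelet.
	minikube -p addons-718460 ssh -- sudo crictl imagefsinfo
	# List the Endpoints objects the provisioner keeps touching.
	kubectl --context addons-718460 -n kube-system get endpoints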
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-718460 -n addons-718460
helpers_test.go:269: (dbg) Run:  kubectl --context addons-718460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bkcrr ingress-nginx-admission-patch-k2kng
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-718460 describe pod ingress-nginx-admission-create-bkcrr ingress-nginx-admission-patch-k2kng
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-718460 describe pod ingress-nginx-admission-create-bkcrr ingress-nginx-admission-patch-k2kng: exit status 1 (102.549219ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bkcrr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k2kng" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-718460 describe pod ingress-nginx-admission-create-bkcrr ingress-nginx-admission-patch-k2kng: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-718460 addons disable ingress-dns --alsologtostderr -v=1: (1.681584987s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-718460 addons disable ingress --alsologtostderr -v=1: (8.115469929s)
--- FAIL: TestAddons/parallel/Ingress (156.59s)
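The decisive step in this failure is the in-node probe at addons_test.go:264: ssh reports "Process exited with status 28", and 28 is curl's exit code for an operation timeout, so the request to 127.0.0.1:80 inside the node hung until curl gave up rather than being refused outright. A reproduction sketch with a tighter bound (the --max-time value and the follow-up checks are additions for illustration, not part of the harness):

	# Re-run the probe with an explicit 10s timeout instead of waiting ~2m11s.
	minikube -p addons-718460 ssh -- \
	  "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# If it still times out, check that the controller pod is Running and
	# that the Ingress object was admitted with an address.
	kubectl --context addons-718460 -n ingress-nginx get pods -o wide
	kubectl --context addons-718460 get ingress --all-namespaces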

TestFunctional/parallel/ServiceCmdConnect (604.1s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-599498 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-599498 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-jpkpp" [421cb5b8-ffc4-4787-989a-f04a874d3cc9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0929 10:30:54.412041    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-599498 -n functional-599498
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 10:40:53.645731827 +0000 UTC m=+1257.987240160
functional_test.go:1645: (dbg) Run:  kubectl --context functional-599498 describe po hello-node-connect-7d85dfc575-jpkpp -n default
functional_test.go:1645: (dbg) kubectl --context functional-599498 describe po hello-node-connect-7d85dfc575-jpkpp -n default:
Name:             hello-node-connect-7d85dfc575-jpkpp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-599498/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:30:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7bpkt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7bpkt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jpkpp to functional-599498
Normal   Pulling    7m4s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-599498 logs hello-node-connect-7d85dfc575-jpkpp -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-599498 logs hello-node-connect-7d85dfc575-jpkpp -n default: exit status 1 (110.763044ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-jpkpp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-599498 logs hello-node-connect-7d85dfc575-jpkpp -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
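The Events table above pinpoints the root cause: the deployment was created with the unqualified image name "kicbase/echo-server" (functional_test.go:1636), and CRI-O refuses to guess a registry because /etc/containers/registries.conf on the node defines no unqualified-search registries, so every pull fails before a registry is even contacted. A remediation sketch under the assumption that the image lives on docker.io (both commands are standard kubectl/minikube usage, not part of the harness):

	# Option 1: fully qualify the image on the existing deployment.
	kubectl --context functional-599498 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server
	# Option 2: allow short-name resolution on the node, then restart CRI-O
	# so the new registries.conf is picked up.
	minikube -p functional-599498 ssh -- "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"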
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-599498 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-jpkpp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-599498/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:30:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7bpkt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7bpkt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jpkpp to functional-599498
Normal   Pulling    7m4s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-599498 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-599498 logs -l app=hello-node-connect: exit status 1 (89.270128ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-jpkpp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-599498 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-599498 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.83.4
IPs:                      10.102.83.4
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32598/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
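Note the empty Endpoints field in the Service description: with the only backing pod stuck in ImagePullBackOff and never Ready, the Service selects zero ready endpoints, so NodePort 32598 had nothing to forward to and the connect half of the test could not have passed regardless. A quick confirmation sketch (the label is the standard one Kubernetes stamps on EndpointSlices, not something emitted by this run):

	kubectl --context functional-599498 get endpointslices \
	  -l kubernetes.io/service-name=hello-node-connect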
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-599498
helpers_test.go:243: (dbg) docker inspect functional-599498:

-- stdout --
	[
	    {
	        "Id": "e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2",
	        "Created": "2025-09-29T10:28:06.233436463Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:28:06.292797009Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/hosts",
	        "LogPath": "/var/lib/docker/containers/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2-json.log",
	        "Name": "/functional-599498",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-599498:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-599498",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2",
	                "LowerDir": "/var/lib/docker/overlay2/a2dc904697803956fe9b962b0fece5aed21d86b7c0153d74d13cbdda97055d81-init/diff:/var/lib/docker/overlay2/03dcb74e0e5b38ad12cb364793e3e5cf6f66af30c67c32b56aeac11291ac3658/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2dc904697803956fe9b962b0fece5aed21d86b7c0153d74d13cbdda97055d81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2dc904697803956fe9b962b0fece5aed21d86b7c0153d74d13cbdda97055d81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2dc904697803956fe9b962b0fece5aed21d86b7c0153d74d13cbdda97055d81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-599498",
	                "Source": "/var/lib/docker/volumes/functional-599498/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-599498",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-599498",
	                "name.minikube.sigs.k8s.io": "functional-599498",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5831dd871f0145bbcf818faedc42a217c84376779e70e328e70be38df808c015",
	            "SandboxKey": "/var/run/docker/netns/5831dd871f01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-599498": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:60:53:f2:ba:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d508d8a2e2a93d1e7fe0d9d9a266ec13f06df8a0bca499a03b00b57d3d75802",
	                    "EndpointID": "b743f400284254faa3e7bf34f1dcb4e791fa88b4b053e2c7d5e69aeeb054dfc8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-599498",
	                        "e69f16173a20"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-599498 -n functional-599498
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 logs -n 25: (1.818968203s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-599498 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ kubectl │ functional-599498 kubectl -- --context functional-599498 get pods                                                          │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ start   │ -p functional-599498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ service │ invalid-svc -p functional-599498                                                                                           │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │                     │
	│ config  │ functional-599498 config unset cpus                                                                                        │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ cp      │ functional-599498 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ config  │ functional-599498 config get cpus                                                                                          │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │                     │
	│ config  │ functional-599498 config set cpus 2                                                                                        │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ config  │ functional-599498 config get cpus                                                                                          │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ config  │ functional-599498 config unset cpus                                                                                        │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ config  │ functional-599498 config get cpus                                                                                          │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │                     │
	│ ssh     │ functional-599498 ssh -n functional-599498 sudo cat /home/docker/cp-test.txt                                               │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ ssh     │ functional-599498 ssh echo hello                                                                                           │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ cp      │ functional-599498 cp functional-599498:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3637814465/001/cp-test.txt │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ ssh     │ functional-599498 ssh cat /etc/hostname                                                                                    │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ tunnel  │ functional-599498 tunnel --alsologtostderr                                                                                 │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │                     │
	│ tunnel  │ functional-599498 tunnel --alsologtostderr                                                                                 │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │                     │
	│ ssh     │ functional-599498 ssh -n functional-599498 sudo cat /home/docker/cp-test.txt                                               │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ cp      │ functional-599498 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ tunnel  │ functional-599498 tunnel --alsologtostderr                                                                                 │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │                     │
	│ ssh     │ functional-599498 ssh -n functional-599498 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ addons  │ functional-599498 addons list                                                                                              │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	│ addons  │ functional-599498 addons list -o json                                                                                      │ functional-599498 │ jenkins │ v1.37.0 │ 29 Sep 25 10:30 UTC │ 29 Sep 25 10:30 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:30:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:30:01.916710   26826 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:30:01.916902   26826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:30:01.916905   26826 out.go:374] Setting ErrFile to fd 2...
	I0929 10:30:01.916909   26826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:30:01.917244   26826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:30:01.917768   26826 out.go:368] Setting JSON to false
	I0929 10:30:01.918788   26826 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":751,"bootTime":1759141051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 10:30:01.918848   26826 start.go:140] virtualization:  
	I0929 10:30:01.922667   26826 out.go:179] * [functional-599498] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 10:30:01.925777   26826 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:30:01.925859   26826 notify.go:220] Checking for updates...
	I0929 10:30:01.931839   26826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:30:01.934871   26826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:30:01.937697   26826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 10:30:01.940523   26826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 10:30:01.943490   26826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:30:01.947375   26826 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:30:01.947464   26826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:30:01.976665   26826 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 10:30:01.976776   26826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:30:02.044807   26826 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-29 10:30:02.030992123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:30:02.044906   26826 docker.go:318] overlay module found
	I0929 10:30:02.048064   26826 out.go:179] * Using the docker driver based on existing profile
	I0929 10:30:02.051046   26826 start.go:304] selected driver: docker
	I0929 10:30:02.051056   26826 start.go:924] validating driver "docker" against &{Name:functional-599498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:30:02.051200   26826 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:30:02.051312   26826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:30:02.117157   26826 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-29 10:30:02.107707833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:30:02.117554   26826 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:30:02.117583   26826 cni.go:84] Creating CNI manager for ""
	I0929 10:30:02.117640   26826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:30:02.117678   26826 start.go:348] cluster config:
	{Name:functional-599498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
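For reference, a cluster with the fields dumped above (docker driver, crio runtime, 4096 MB / 2 CPUs, API server on port 8441, the NamespaceAutoProvision admission plugin) could be started with flags along these lines. This is a reconstruction from the config struct, not the harness's literal invocation:

    minikube start -p functional-599498 \
      --driver=docker \
      --container-runtime=crio \
      --memory=4096 --cpus=2 \
      --apiserver-port=8441 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision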
	I0929 10:30:02.122601   26826 out.go:179] * Starting "functional-599498" primary control-plane node in "functional-599498" cluster
	I0929 10:30:02.125666   26826 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:30:02.128674   26826 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:30:02.131473   26826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:30:02.131520   26826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 10:30:02.131528   26826 cache.go:58] Caching tarball of preloaded images
	I0929 10:30:02.131569   26826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:30:02.131629   26826 preload.go:172] Found /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 10:30:02.131638   26826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:30:02.131743   26826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/config.json ...
	I0929 10:30:02.157583   26826 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 10:30:02.157595   26826 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 10:30:02.157612   26826 cache.go:232] Successfully downloaded all kic artifacts
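The pull is skipped because image.go found the digest-pinned kicbase image in the local daemon. A rough shell equivalent of that check (the echo strings are illustrative, not minikube's output):

    # Exit status 0 means the exact digest is already present locally.
    docker image inspect \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      > /dev/null 2>&1 && echo "cached, skip pull" || echo "not cached, pull needed"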
	I0929 10:30:02.157634   26826 start.go:360] acquireMachinesLock for functional-599498: {Name:mk47fb6fe4f9fe65244835f8eb0ebf08ce954830 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:30:02.157692   26826 start.go:364] duration metric: took 42.577µs to acquireMachinesLock for "functional-599498"
	I0929 10:30:02.157710   26826 start.go:96] Skipping create...Using existing machine configuration
	I0929 10:30:02.157714   26826 fix.go:54] fixHost starting: 
	I0929 10:30:02.157990   26826 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
	I0929 10:30:02.175421   26826 fix.go:112] recreateIfNeeded on functional-599498: state=Running err=<nil>
	W0929 10:30:02.175440   26826 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 10:30:02.178765   26826 out.go:252] * Updating the running docker "functional-599498" container ...
	I0929 10:30:02.178792   26826 machine.go:93] provisionDockerMachine start ...
	I0929 10:30:02.178882   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:02.198048   26826 main.go:141] libmachine: Using SSH client type: native
	I0929 10:30:02.198363   26826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0929 10:30:02.198370   26826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:30:02.338576   26826 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-599498
	
	I0929 10:30:02.338589   26826 ubuntu.go:182] provisioning hostname "functional-599498"
	I0929 10:30:02.338664   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:02.357283   26826 main.go:141] libmachine: Using SSH client type: native
	I0929 10:30:02.357610   26826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0929 10:30:02.357623   26826 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-599498 && echo "functional-599498" | sudo tee /etc/hostname
	I0929 10:30:02.515842   26826 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-599498
	
	I0929 10:30:02.515919   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:02.535277   26826 main.go:141] libmachine: Using SSH client type: native
	I0929 10:30:02.535586   26826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0929 10:30:02.535600   26826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-599498' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-599498/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-599498' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:30:02.675343   26826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
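The script above is idempotent: it touches /etc/hosts only when no entry already ends in the hostname, preferring to rewrite an existing 127.0.1.1 line over appending a new one. A spot-check of the result from the host (hypothetical, not part of the test run):

    docker exec functional-599498 grep '^127.0.1.1' /etc/hosts
    # expected: 127.0.1.1 functional-599498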
	I0929 10:30:02.675360   26826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21657-2306/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-2306/.minikube}
	I0929 10:30:02.675375   26826 ubuntu.go:190] setting up certificates
	I0929 10:30:02.675383   26826 provision.go:84] configureAuth start
	I0929 10:30:02.675460   26826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-599498
	I0929 10:30:02.693879   26826 provision.go:143] copyHostCerts
	I0929 10:30:02.693934   26826 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem, removing ...
	I0929 10:30:02.693950   26826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem
	I0929 10:30:02.694030   26826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem (1679 bytes)
	I0929 10:30:02.694124   26826 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem, removing ...
	I0929 10:30:02.694128   26826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem
	I0929 10:30:02.694152   26826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem (1082 bytes)
	I0929 10:30:02.694205   26826 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem, removing ...
	I0929 10:30:02.694209   26826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem
	I0929 10:30:02.694230   26826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem (1123 bytes)
	I0929 10:30:02.694271   26826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem org=jenkins.functional-599498 san=[127.0.0.1 192.168.49.2 functional-599498 localhost minikube]
	I0929 10:30:02.992837   26826 provision.go:177] copyRemoteCerts
	I0929 10:30:02.992886   26826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:30:02.992929   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:03.018991   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:03.120353   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:30:03.148054   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 10:30:03.173440   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 10:30:03.199754   26826 provision.go:87] duration metric: took 524.350548ms to configureAuth
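The server-cert step logged at 10:30:02 (org=jenkins.functional-599498, SANs 127.0.0.1, 192.168.49.2, functional-599498, localhost, minikube) can be approximated with openssl. A minimal sketch, assuming the CA pair sits in the working directory as ca.pem/ca-key.pem (names borrowed from the CaCertPath/CaPrivateKeyPath fields above):

    # Machine key + CSR, then a CA-signed cert carrying the SAN list from the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.functional-599498"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-599498,DNS:localhost,DNS:minikube')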
	I0929 10:30:03.199771   26826 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:30:03.199959   26826 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:30:03.200080   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:03.222710   26826 main.go:141] libmachine: Using SSH client type: native
	I0929 10:30:03.223002   26826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0929 10:30:03.223017   26826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:30:08.644200   26826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 10:30:08.644210   26826 machine.go:96] duration metric: took 6.465413494s to provisionDockerMachine
	I0929 10:30:08.644219   26826 start.go:293] postStartSetup for "functional-599498" (driver="docker")
	I0929 10:30:08.644228   26826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:30:08.644289   26826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:30:08.644335   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:08.661893   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:08.760000   26826 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:30:08.763010   26826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:30:08.763032   26826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:30:08.763041   26826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:30:08.763047   26826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:30:08.763055   26826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-2306/.minikube/addons for local assets ...
	I0929 10:30:08.763107   26826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-2306/.minikube/files for local assets ...
	I0929 10:30:08.763200   26826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem -> 41082.pem in /etc/ssl/certs
	I0929 10:30:08.763276   26826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/test/nested/copy/4108/hosts -> hosts in /etc/test/nested/copy/4108
	I0929 10:30:08.763319   26826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4108
	I0929 10:30:08.771807   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem --> /etc/ssl/certs/41082.pem (1708 bytes)
	I0929 10:30:08.796094   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/test/nested/copy/4108/hosts --> /etc/test/nested/copy/4108/hosts (40 bytes)
	I0929 10:30:08.820613   26826 start.go:296] duration metric: took 176.381547ms for postStartSetup
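filesync.go mapped the two assets under .minikube/files into the node at the paths shown. A quick check that both arrived with the sizes from the scp lines (1708 and 40 bytes):

    docker exec functional-599498 \
      ls -l /etc/ssl/certs/41082.pem /etc/test/nested/copy/4108/hosts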
	I0929 10:30:08.820697   26826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:30:08.820733   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:08.837623   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:08.932534   26826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:30:08.937492   26826 fix.go:56] duration metric: took 6.779771665s for fixHost
	I0929 10:30:08.937507   26826 start.go:83] releasing machines lock for "functional-599498", held for 6.779808014s
	I0929 10:30:08.937571   26826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-599498
	I0929 10:30:08.955344   26826 ssh_runner.go:195] Run: cat /version.json
	I0929 10:30:08.955376   26826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:30:08.955385   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:08.955429   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:08.981768   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:08.981769   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:09.217105   26826 ssh_runner.go:195] Run: systemctl --version
	I0929 10:30:09.221440   26826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:30:09.363331   26826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:30:09.368103   26826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:30:09.377360   26826 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:30:09.377435   26826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:30:09.386708   26826 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
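Both find/-exec invocations implement minikube's disable-by-rename convention: conflicting CNI configs gain a .mk_disabled suffix so the CNI loader no longer picks them up, and already-renamed files are skipped. The same pattern with the quoting spelled out:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      -name '*loopback.conf*' -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;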
	I0929 10:30:09.386723   26826 start.go:495] detecting cgroup driver to use...
	I0929 10:30:09.386755   26826 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 10:30:09.386808   26826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:30:09.399759   26826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:30:09.412257   26826 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:30:09.412309   26826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:30:09.425798   26826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:30:09.437341   26826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:30:09.560591   26826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:30:09.687939   26826 docker.go:234] disabling docker service ...
	I0929 10:30:09.688000   26826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:30:09.701148   26826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:30:09.713303   26826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:30:09.835390   26826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:30:09.968579   26826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
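Since the node will run CRI-O only, cri-docker and docker are stopped, disabled and masked so that neither socket activation nor a unit dependency can bring them back. Condensed, the docker half of the sequence is:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"   # non-active is the goal here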
	I0929 10:30:09.980713   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:30:10.004875   26826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:30:10.004985   26826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.034928   26826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 10:30:10.034987   26826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.046779   26826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.057951   26826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.068745   26826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:30:10.078977   26826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.091988   26826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.102874   26826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:30:10.114544   26826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:30:10.124257   26826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:30:10.133642   26826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:30:10.262656   26826 ssh_runner.go:195] Run: sudo systemctl restart crio
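Taken together, the sed edits above should leave the drop-in with the registry.k8s.io/pause:3.10.1 pause image, cgroupfs as cgroup manager, conmon in the pod cgroup, and unprivileged low ports enabled; the restart here picks all of that up. One way to confirm:

    docker exec functional-599498 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",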
	I0929 10:30:10.462455   26826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:30:10.462511   26826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:30:10.466235   26826 start.go:563] Will wait 60s for crictl version
	I0929 10:30:10.466287   26826 ssh_runner.go:195] Run: which crictl
	I0929 10:30:10.469693   26826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:30:10.505643   26826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 10:30:10.505729   26826 ssh_runner.go:195] Run: crio --version
	I0929 10:30:10.548266   26826 ssh_runner.go:195] Run: crio --version
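crictl resolves its endpoint from the /etc/crictl.yaml written at 10:30:09; the explicit-flag equivalent of the version probe above would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version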
	I0929 10:30:10.591095   26826 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 10:30:10.594127   26826 cli_runner.go:164] Run: docker network inspect functional-599498 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:30:10.610279   26826 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:30:10.617115   26826 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0929 10:30:10.619899   26826 kubeadm.go:875] updating cluster {Name:functional-599498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:30:10.620030   26826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:30:10.620112   26826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:30:10.702730   26826 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:30:10.702741   26826 crio.go:433] Images already preloaded, skipping extraction
	I0929 10:30:10.702794   26826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:30:10.770093   26826 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:30:10.770104   26826 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:30:10.770111   26826 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0929 10:30:10.770209   26826 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-599498 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:30:10.770282   26826 ssh_runner.go:195] Run: crio config
	I0929 10:30:10.852012   26826 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0929 10:30:10.852031   26826 cni.go:84] Creating CNI manager for ""
	I0929 10:30:10.852040   26826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:30:10.852050   26826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:30:10.852070   26826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-599498 NodeName:functional-599498 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:30:10.852205   26826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-599498"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:30:10.852267   26826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:30:10.860882   26826 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:30:10.860938   26826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:30:10.869539   26826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0929 10:30:10.887875   26826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:30:10.905928   26826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
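With the 2064-byte kubeadm.yaml.new now on the node, a config of the shape shown above can also be sanity-checked in place. A sketch, assuming a kubeadm binary of the matching minor version is on the node's PATH:

    # Validates the kubeadm and component-config documents against their schemas.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new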
	I0929 10:30:10.925298   26826 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:30:10.929134   26826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:30:11.053514   26826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:30:11.066743   26826 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498 for IP: 192.168.49.2
	I0929 10:30:11.066755   26826 certs.go:194] generating shared ca certs ...
	I0929 10:30:11.066769   26826 certs.go:226] acquiring lock for ca certs: {Name:mkddeaa430ffcc39cce53e20ea2b5588c6828a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:30:11.066911   26826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key
	I0929 10:30:11.066956   26826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key
	I0929 10:30:11.066962   26826 certs.go:256] generating profile certs ...
	I0929 10:30:11.067048   26826 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.key
	I0929 10:30:11.067096   26826 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/apiserver.key.bddd0bb6
	I0929 10:30:11.067154   26826 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/proxy-client.key
	I0929 10:30:11.067266   26826 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/4108.pem (1338 bytes)
	W0929 10:30:11.067314   26826 certs.go:480] ignoring /home/jenkins/minikube-integration/21657-2306/.minikube/certs/4108_empty.pem, impossibly tiny 0 bytes
	I0929 10:30:11.067321   26826 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:30:11.067348   26826 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:30:11.067372   26826 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:30:11.067391   26826 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem (1679 bytes)
	I0929 10:30:11.067429   26826 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem (1708 bytes)
	I0929 10:30:11.068008   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:30:11.095189   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 10:30:11.121082   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:30:11.149035   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:30:11.177823   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 10:30:11.202658   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:30:11.227953   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:30:11.255266   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 10:30:11.281294   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem --> /usr/share/ca-certificates/41082.pem (1708 bytes)
	I0929 10:30:11.305828   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:30:11.330252   26826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/certs/4108.pem --> /usr/share/ca-certificates/4108.pem (1338 bytes)
	I0929 10:30:11.354066   26826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:30:11.371496   26826 ssh_runner.go:195] Run: openssl version
	I0929 10:30:11.376677   26826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4108.pem && ln -fs /usr/share/ca-certificates/4108.pem /etc/ssl/certs/4108.pem"
	I0929 10:30:11.386079   26826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4108.pem
	I0929 10:30:11.389595   26826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 10:28 /usr/share/ca-certificates/4108.pem
	I0929 10:30:11.389653   26826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4108.pem
	I0929 10:30:11.396557   26826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4108.pem /etc/ssl/certs/51391683.0"
	I0929 10:30:11.405831   26826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41082.pem && ln -fs /usr/share/ca-certificates/41082.pem /etc/ssl/certs/41082.pem"
	I0929 10:30:11.415625   26826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41082.pem
	I0929 10:30:11.419043   26826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 10:28 /usr/share/ca-certificates/41082.pem
	I0929 10:30:11.419095   26826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41082.pem
	I0929 10:30:11.426060   26826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41082.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 10:30:11.435062   26826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:30:11.444394   26826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:30:11.447964   26826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:30:11.448016   26826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:30:11.454846   26826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
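The three test -L / ln -fs commands above reproduce OpenSSL's c_rehash layout: each trust file is linked as <subject-hash>.0 (here 51391683, 3ec20f2e and b5213941, the values printed by the openssl x509 -hash calls) so verification can locate it by hash. The general pattern:

    for pem in /usr/share/ca-certificates/*.pem; do
      h=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    done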
	I0929 10:30:11.463599   26826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:30:11.466983   26826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 10:30:11.473883   26826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 10:30:11.480885   26826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 10:30:11.487702   26826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 10:30:11.494628   26826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 10:30:11.501792   26826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
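Each -checkend 86400 probe exits non-zero if the certificate expires within the next 24h, which would trigger regeneration instead of reuse. The same sweep written as a loop over the paths from the log (bash brace expansion):

    for crt in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
               /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 || echo "$crt expires within 24h"
    done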
	I0929 10:30:11.508557   26826 kubeadm.go:392] StartCluster: {Name:functional-599498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:30:11.508634   26826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:30:11.508704   26826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:30:11.551657   26826 cri.go:89] found id: "d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134"
	I0929 10:30:11.551668   26826 cri.go:89] found id: "86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14"
	I0929 10:30:11.551671   26826 cri.go:89] found id: "ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853"
	I0929 10:30:11.551674   26826 cri.go:89] found id: "eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08"
	I0929 10:30:11.551676   26826 cri.go:89] found id: "0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3"
	I0929 10:30:11.551679   26826 cri.go:89] found id: "5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734"
	I0929 10:30:11.551682   26826 cri.go:89] found id: "25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672"
	I0929 10:30:11.551684   26826 cri.go:89] found id: "838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968"
	I0929 10:30:11.551686   26826 cri.go:89] found id: ""
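With eight kube-system container IDs collected via crictl, StartCluster cross-checks runc's view of them; the raw JSON that follows is easier to read filtered (assuming jq is available on the node):

    sudo runc list -f json | jq -r '.[] | "\(.id[0:12])  \(.status)"'
    # prints one '<short-id>  <status>' line per container, e.g. '0b9a8578f993  stopped'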
	I0929 10:30:11.551749   26826 ssh_runner.go:195] Run: sudo runc list -f json
	I0929 10:30:11.574481   26826 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3/userdata","rootfs":"/var/lib/containers/storage/overlay/fa3ed5572b093bcefd45467750d2694b4e071e3081eee4c2d8dd724f2dba1e3a/merged","created":"2025-09-29T10:29:33.937153194Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e2e56a4","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e2e56a4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessa
gePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.616165484Z","io.kubernetes.cri-o.Image":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.0","io.kubernetes.cri-o.ImageRef":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-2s84x\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"efba638e-1a4f-400b-b7a1-32fbf390a219\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-2s84x_efba638e-1a4f-400b-b7a1-32fbf390a219/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoin
t":"/var/lib/containers/storage/overlay/fa3ed5572b093bcefd45467750d2694b4e071e3081eee4c2d8dd724f2dba1e3a/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-2s84x_kube-system_efba638e-1a4f-400b-b7a1-32fbf390a219_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fab5fc42698c6145ccaf1bc6ffd301454e89fdc2ddb6ba9aaaef5fb1a7477675/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fab5fc42698c6145ccaf1bc6ffd301454e89fdc2ddb6ba9aaaef5fb1a7477675","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-2s84x_kube-system_efba638e-1a4f-400b-b7a1-32fbf390a219_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/efba638e-1a4f-400b-b7a1-32fbf390a219/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/efba638e-1a4f-400b-b7a1-32fbf390a219/containers/kube-proxy/eb5888ba\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/efba638e-1a4f-400b-b7a1-32fbf390a219/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/efba638e-1a4f-400b-b7a1-32fbf390a219/volumes/kubernetes.io~projected/kube-api-access-v6wkt\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-2s84x","io.kubernetes.pod.namespace":"kube-system","io.kuber
netes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"efba638e-1a4f-400b-b7a1-32fbf390a219","kubernetes.io/config.seen":"2025-09-29T10:28:38.529072078Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672/userdata","rootfs":"/var/lib/containers/storage/overlay/cd65ec0ebe59697858404ec5cd6125bcac3486fc2a29c07bd215d1bc168f2401/merged","created":"2025-09-29T10:29:33.692456356Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.con
tainer.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.557687973Z","io.kubernetes.cri-o.Image":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.Labels":"{\"
io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-599498\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f047b95540bd3307fb358ab6dbfa100b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-599498_f047b95540bd3307fb358ab6dbfa100b/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cd65ec0ebe59697858404ec5cd6125bcac3486fc2a29c07bd215d1bc168f2401/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-599498_kube-system_f047b95540bd3307fb358ab6dbfa100b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5","io.kubernetes.cri-o.SandboxName":"k8s_ku
be-apiserver-functional-599498_kube-system_f047b95540bd3307fb358ab6dbfa100b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f047b95540bd3307fb358ab6dbfa100b/containers/kube-apiserver/42c2af46\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f047b95540bd3307fb358ab6dbfa100b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\
"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-599498","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f047b95540bd3307fb358ab6dbfa100b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"f047b95540bd3307fb358ab6dbfa100b","kubernetes.io/config.seen":"2025-09-29T10:28:25.018472937Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734","pid":0,"status":"stopped","bundle":"/run/co
ntainers/storage/overlay-containers/5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734/userdata","rootfs":"/var/lib/containers/storage/overlay/3a1a4e440593bec222408fcbeb1f0ec4c15a81d8342505639a25b21dc9c3d61a/merged","created":"2025-09-29T10:29:33.72666822Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminat
ionMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.583747864Z","io.kubernetes.cri-o.Image":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-599498\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"229d2bb9173ac3f3d836c1d7b9553931\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-599498_229d2bb9173ac3f3d836c1d7b9553931/kube-
scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3a1a4e440593bec222408fcbeb1f0ec4c15a81d8342505639a25b21dc9c3d61a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-599498_kube-system_229d2bb9173ac3f3d836c1d7b9553931_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-599498_kube-system_229d2bb9173ac3f3d836c1d7b9553931_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/229d2bb9173ac3f3d
836c1d7b9553931/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/229d2bb9173ac3f3d836c1d7b9553931/containers/kube-scheduler/91bc5679\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-599498","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"229d2bb9173ac3f3d836c1d7b9553931","kubernetes.io/config.hash":"229d2bb9173ac3f3d836c1d7b9553931","kubernetes.io/config.seen":"2025-09-29T10:28:25.018475522Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968","pid":0,"status":"stopped","bundle":"/run/containers/storag
e/overlay-containers/838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968/userdata","rootfs":"/var/lib/containers/storage/overlay/c6ec7d1995f10cd935f2e1e0011b7a512558cc377329d08989a1cb7e95492e09/merged","created":"2025-09-29T10:29:33.630091518Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968","io.kubernetes.cri-o.Container
Type":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.535659082Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1523dcbc-3074-4a83-b79d-31d0fc96a8b0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1523dcbc-3074-4a83-b79d-31d0fc96a8b0/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c6ec7d1995f10cd935f2e1e0011b7a512558cc377329d08989a1cb7e95492e09/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner
_storage-provisioner_kube-system_1523dcbc-3074-4a83-b79d-31d0fc96a8b0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e7efb6f3d4f659f9818be6bcc5e12df2d5a7dadb96f937aa28ebf2bbefd29493/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e7efb6f3d4f659f9818be6bcc5e12df2d5a7dadb96f937aa28ebf2bbefd29493","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1523dcbc-3074-4a83-b79d-31d0fc96a8b0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1523dcbc-3074-4a83-b79d-31d0fc96a8b0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1523dc
bc-3074-4a83-b79d-31d0fc96a8b0/containers/storage-provisioner/5e5be339\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1523dcbc-3074-4a83-b79d-31d0fc96a8b0/volumes/kubernetes.io~projected/kube-api-access-tb96x\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1523dcbc-3074-4a83-b79d-31d0fc96a8b0","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\"
,\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-09-29T10:29:21.035338882Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14/userdata","rootfs":"/var/lib/containers/storage/overlay/3d35feaee1344612fda4c9b80a9a86554785c19ae6f90fda11f2abd7a28b369a/merged","created":"2025-09-29T10:29:33.891857652Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.containe
r.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.710503665Z","io.kubernetes.cri-o.Image":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"ki
ndnet-s5dvx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"85b8e319-b1c6-47ad-bbc9-aa68a0a6c791\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-s5dvx_85b8e319-b1c6-47ad-bbc9-aa68a0a6c791/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3d35feaee1344612fda4c9b80a9a86554785c19ae6f90fda11f2abd7a28b369a/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-s5dvx_kube-system_85b8e319-b1c6-47ad-bbc9-aa68a0a6c791_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7de938d84e14b7f838d8cc4f408a75a95a465ac436d9f2de009dcea68cfd0ec5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7de938d84e14b7f838d8cc4f408a75a95a465ac436d9f2de009dcea68cfd0ec5","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-s5dvx_kube-system_85b8e319-b1c6-47ad-bbc9-aa68a0a6c791_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin"
:"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/85b8e319-b1c6-47ad-bbc9-aa68a0a6c791/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/85b8e319-b1c6-47ad-bbc9-aa68a0a6c791/containers/kindnet-cni/82e0f910\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/k
ubelet/pods/85b8e319-b1c6-47ad-bbc9-aa68a0a6c791/volumes/kubernetes.io~projected/kube-api-access-rmb54\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-s5dvx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"85b8e319-b1c6-47ad-bbc9-aa68a0a6c791","kubernetes.io/config.seen":"2025-09-29T10:28:38.528316132Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853/userdata","rootfs":"/var/lib/containers/storage/overlay/ef223b374502796650063896722ed2fb712fbc5da36f9e8667c968339b09aebe/merged","created":"2025-09-29T10:29:33.849651348Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-cont
roller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.686796242Z","io.kubernetes.cri-o.I
mage":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-599498\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b136f0acf1d5bb3927a972cd2fc5c2dc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-599498_b136f0acf1d5bb3927a972cd2fc5c2dc/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ef223b374502796650063896722ed2fb712fbc5da36f9e8667c968339b09aebe/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-599498_kub
e-system_b136f0acf1d5bb3927a972cd2fc5c2dc_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-599498_kube-system_b136f0acf1d5bb3927a972cd2fc5c2dc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b136f0acf1d5bb3927a972cd2fc5c2dc/containers/kube-controller-manager/95383b65\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/h
osts\",\"host_path\":\"/var/lib/kubelet/pods/b136f0acf1d5bb3927a972cd2fc5c2dc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volum
e/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-599498","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b136f0acf1d5bb3927a972cd2fc5c2dc","kubernetes.io/config.hash":"b136f0acf1d5bb3927a972cd2fc5c2dc","kubernetes.io/config.seen":"2025-09-29T10:28:25.018474414Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134/userdata","rootfs":"/var/lib/containers/storage/overlay/4ada4d690f85702f4ad664d908edb92cd66cb5a17a11ec396ec899faaf201ed5/merged","created":"2025-09-29T10:29:33.859154357Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.conta
iner.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:
29:33.718903348Z","io.kubernetes.cri-o.Image":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-599498\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c55b7db6ee3ae87388c61efd5156bc63\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-599498_c55b7db6ee3ae87388c61efd5156bc63/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4ada4d690f85702f4ad664d908edb92cd66cb5a17a11ec396ec899faaf201ed5/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-599498_kube-system_c55b7db6ee3ae87388c61efd5156bc63_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-c
ontainers/85a98d57bbaa441fcae82507f374eca9aff00a8f615edab0ff348846559d13c0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"85a98d57bbaa441fcae82507f374eca9aff00a8f615edab0ff348846559d13c0","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-599498_kube-system_c55b7db6ee3ae87388c61efd5156bc63_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c55b7db6ee3ae87388c61efd5156bc63/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c55b7db6ee3ae87388c61efd5156bc63/containers/etcd/2a456ba4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\
":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-599498","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c55b7db6ee3ae87388c61efd5156bc63","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"c55b7db6ee3ae87388c61efd5156bc63","kubernetes.io/config.seen":"2025-09-29T10:28:25.018467177Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08/userdata","rootfs":"/var/lib/containers/storage/overlay/abedd41214f4c050161208ac5fdee3ee3d6b9bf8481f696d19de28c9969a7436/merged","created":"2025-09-
29T10:29:33.900253946Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"con
tainerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T10:29:33.652752011Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.Lab
els":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-ccwkf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0f8e44fa-650f-41f6-8e26-c024052e1986\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-ccwkf_0f8e44fa-650f-41f6-8e26-c024052e1986/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/abedd41214f4c050161208ac5fdee3ee3d6b9bf8481f696d19de28c9969a7436/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-ccwkf_kube-system_0f8e44fa-650f-41f6-8e26-c024052e1986_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6d1d7742a680e34cb39dba6d0dc11268463b6b730236f4a7aa5ccd7ece011c59/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6d1d7742a680e34cb39dba6d0dc11268463b6b730236f4a7aa5ccd7ece011c59","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-ccwkf_kube-sys
tem_0f8e44fa-650f-41f6-8e26-c024052e1986_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/0f8e44fa-650f-41f6-8e26-c024052e1986/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0f8e44fa-650f-41f6-8e26-c024052e1986/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0f8e44fa-650f-41f6-8e26-c024052e1986/containers/coredns/e77460cc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/0f8e44fa-650f-41f6-8e26-c024052e1986/volumes/kubernetes.io~proj
ected/kube-api-access-6t6gp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-ccwkf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0f8e44fa-650f-41f6-8e26-c024052e1986","kubernetes.io/config.seen":"2025-09-29T10:29:21.039280060Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0929 10:30:11.575241   26826 cri.go:126] list returned 8 containers
	I0929 10:30:11.575250   26826 cri.go:129] container: {ID:0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3 Status:stopped}
	I0929 10:30:11.575263   26826 cri.go:135] skipping {0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575271   26826 cri.go:129] container: {ID:25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672 Status:stopped}
	I0929 10:30:11.575275   26826 cri.go:135] skipping {25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575279   26826 cri.go:129] container: {ID:5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734 Status:stopped}
	I0929 10:30:11.575284   26826 cri.go:135] skipping {5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575293   26826 cri.go:129] container: {ID:838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968 Status:stopped}
	I0929 10:30:11.575298   26826 cri.go:135] skipping {838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575307   26826 cri.go:129] container: {ID:86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14 Status:stopped}
	I0929 10:30:11.575320   26826 cri.go:135] skipping {86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575329   26826 cri.go:129] container: {ID:ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853 Status:stopped}
	I0929 10:30:11.575334   26826 cri.go:135] skipping {ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575339   26826 cri.go:129] container: {ID:d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134 Status:stopped}
	I0929 10:30:11.575347   26826 cri.go:135] skipping {d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134 stopped}: state = "stopped", want "paused"
	I0929 10:30:11.575350   26826 cri.go:129] container: {ID:eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08 Status:stopped}
	I0929 10:30:11.575355   26826 cri.go:135] skipping {eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08 stopped}: state = "stopped", want "paused"
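The skip lines above come from minikube's CRI helper filtering listed containers by state before acting on them. A minimal Go sketch of that filter follows; the container type and function names are illustrative, not minikube's actual cri.go API.

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers whose Status equals want and
// logs a skip line (mirroring the log format above) for the rest.
func filterByState(cs []container, want string) []container {
	var keep []container
	for _, c := range cs {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	cs := []container{
		{ID: "0b9a8578f993", Status: "stopped"},
		{ID: "5084e8697ad5", Status: "paused"},
	}
	fmt.Println(filterByState(cs, "paused"))
}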
	I0929 10:30:11.575419   26826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:30:11.584313   26826 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 10:30:11.584330   26826 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 10:30:11.584379   26826 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 10:30:11.592837   26826 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:30:11.593423   26826 kubeconfig.go:125] found "functional-599498" server: "https://192.168.49.2:8441"
	I0929 10:30:11.594963   26826 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 10:30:11.604743   26826 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-29 10:28:16.741312270 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-29 10:30:10.916188805 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
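The drift check above runs `diff -u` on the old and regenerated kubeadm.yaml and treats a non-empty diff as a signal to reconfigure the cluster. A sketch of that logic, relying on diff's documented exit codes (0 = identical, 1 = different, 2 = error); paths mirror the log but are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func kubeadmConfigDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical, no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ, reconfigure
	}
	return false, "", err // exit 2 or exec failure: a real error
}

func main() {
	drift, diff, err := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}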
	I0929 10:30:11.604760   26826 kubeadm.go:1152] stopping kube-system containers ...
	I0929 10:30:11.604771   26826 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 10:30:11.604823   26826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:30:11.648750   26826 cri.go:89] found id: "d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134"
	I0929 10:30:11.648760   26826 cri.go:89] found id: "86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14"
	I0929 10:30:11.648764   26826 cri.go:89] found id: "ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853"
	I0929 10:30:11.648766   26826 cri.go:89] found id: "eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08"
	I0929 10:30:11.648768   26826 cri.go:89] found id: "0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3"
	I0929 10:30:11.648771   26826 cri.go:89] found id: "5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734"
	I0929 10:30:11.648783   26826 cri.go:89] found id: "25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672"
	I0929 10:30:11.648785   26826 cri.go:89] found id: "838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968"
	I0929 10:30:11.648787   26826 cri.go:89] found id: ""
	I0929 10:30:11.648792   26826 cri.go:252] Stopping containers: [d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134 86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14 ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853 eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08 0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3 5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734 25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672 838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968]
	I0929 10:30:11.648847   26826 ssh_runner.go:195] Run: which crictl
	I0929 10:30:11.652193   26826 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134 86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14 ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853 eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08 0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3 5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734 25149d1322f71f7f9262b0f0a939a157f9cab18b719db1a6b08d1e682605f672 838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968
	I0929 10:30:11.737208   26826 ssh_runner.go:195] Run: sudo systemctl stop kubelet
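The two-step shutdown above (list kube-system container IDs with crictl, then stop them with a 10-second grace period) can be sketched in Go as below; this shells out to the same crictl commands the log shows, and assumes it runs on the node with sudo available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists all kube-system containers by label,
// then stops them with crictl's 10s timeout, as in the log above.
func stopKubeSystemContainers() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	return exec.Command("sudo", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("stopping kube-system containers:", err)
	}
}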
	I0929 10:30:11.854327   26826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:30:11.863461   26826 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep 29 10:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Sep 29 10:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep 29 10:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep 29 10:28 /etc/kubernetes/scheduler.conf
	
	I0929 10:30:11.863528   26826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0929 10:30:11.872248   26826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0929 10:30:11.880793   26826 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:30:11.880847   26826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:30:11.889539   26826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0929 10:30:11.898105   26826 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:30:11.898167   26826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:30:11.906759   26826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0929 10:30:11.916093   26826 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:30:11.916170   26826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
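The grep-and-remove loop above checks each existing kubeconfig for the expected control-plane endpoint and deletes any file that no longer mentions it, so the following `kubeadm init phase kubeconfig` can regenerate them. A local sketch of that pruning (the real run does this over ssh with sudo grep/rm; this version assumes locally readable files):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any conf file that does not reference
// the expected control-plane endpoint, forcing regeneration.
func pruneStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	paths := []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441", paths); err != nil {
		fmt.Println(err)
	}
}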
	I0929 10:30:11.925067   26826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:30:11.933950   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 10:30:11.989910   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 10:30:14.705945   26826 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.71601037s)
	I0929 10:30:14.705971   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 10:30:14.895596   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 10:30:14.961948   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0929 10:30:15.070972   26826 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:30:15.071055   26826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:30:15.571945   26826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:30:16.072008   26826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:30:16.107167   26826 api_server.go:72] duration metric: took 1.036152031s to wait for apiserver process to appear ...
	I0929 10:30:16.107182   26826 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:30:16.107201   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:19.447924   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 10:30:19.447940   26826 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 10:30:19.447952   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:19.694924   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 10:30:19.694948   26826 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 10:30:19.694964   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:19.705447   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 10:30:19.705464   26826 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 10:30:20.108032   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:20.117227   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 10:30:20.117248   26826 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 10:30:20.607870   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:20.623480   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 10:30:20.623495   26826 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 10:30:21.108153   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:21.122051   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 10:30:21.143836   26826 api_server.go:141] control plane version: v1.34.0
	I0929 10:30:21.143862   26826 api_server.go:131] duration metric: took 5.036674066s to wait for apiserver health ...
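The sequence above is the healthz wait loop: /healthz is polled roughly every 500ms, the 403 (anonymous access) and 500 (post-start hooks still failing) responses are logged as warnings, and the wait ends once the endpoint returns 200 "ok". A minimal sketch of such a loop; TLS verification is skipped here only because the sketch has no access to the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url every 500ms until it returns 200 or the
// timeout elapses, printing non-200 bodies as the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver /healthz not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}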
	I0929 10:30:21.143870   26826 cni.go:84] Creating CNI manager for ""
	I0929 10:30:21.143875   26826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:30:21.150111   26826 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 10:30:21.153896   26826 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 10:30:21.160579   26826 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 10:30:21.160590   26826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 10:30:21.180226   26826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 10:30:21.631929   26826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:30:21.635910   26826 system_pods.go:59] 8 kube-system pods found
	I0929 10:30:21.635933   26826 system_pods.go:61] "coredns-66bc5c9577-ccwkf" [0f8e44fa-650f-41f6-8e26-c024052e1986] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:30:21.635940   26826 system_pods.go:61] "etcd-functional-599498" [4ac94e5f-c7e5-4e50-8aa6-323cc6ec2339] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:30:21.635946   26826 system_pods.go:61] "kindnet-s5dvx" [85b8e319-b1c6-47ad-bbc9-aa68a0a6c791] Running
	I0929 10:30:21.635962   26826 system_pods.go:61] "kube-apiserver-functional-599498" [02330d8e-efea-47b4-884e-000303502db6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 10:30:21.635968   26826 system_pods.go:61] "kube-controller-manager-functional-599498" [d469901a-1ded-4521-bb3d-124d4e72633f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:30:21.635973   26826 system_pods.go:61] "kube-proxy-2s84x" [efba638e-1a4f-400b-b7a1-32fbf390a219] Running
	I0929 10:30:21.635979   26826 system_pods.go:61] "kube-scheduler-functional-599498" [6af43ff9-35ae-423d-a268-298cc8b78d33] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:30:21.635982   26826 system_pods.go:61] "storage-provisioner" [1523dcbc-3074-4a83-b79d-31d0fc96a8b0] Running
	I0929 10:30:21.635989   26826 system_pods.go:74] duration metric: took 4.049158ms to wait for pod list to return data ...
	I0929 10:30:21.635996   26826 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:30:21.639034   26826 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 10:30:21.639052   26826 node_conditions.go:123] node cpu capacity is 2
	I0929 10:30:21.639073   26826 node_conditions.go:105] duration metric: took 3.072839ms to run NodePressure ...
	I0929 10:30:21.639091   26826 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 10:30:21.913213   26826 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 10:30:21.916742   26826 kubeadm.go:735] kubelet initialised
	I0929 10:30:21.916752   26826 kubeadm.go:736] duration metric: took 3.526852ms waiting for restarted kubelet to initialise ...
	I0929 10:30:21.916766   26826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:30:21.924366   26826 ops.go:34] apiserver oom_adj: -16
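The oom_adj check above finds the kube-apiserver process and reads its /proc/<pid>/oom_adj to confirm the kernel will not OOM-kill it first (-16 here). A sketch mirroring the pgrep invocation from the log (-x exact, -n newest, -f full command line, so a single PID is returned):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reads the OOM adjustment of the newest kube-apiserver
// process, as the `cat /proc/$(pgrep ...)/oom_adj` step above does.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}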
	I0929 10:30:21.924378   26826 kubeadm.go:593] duration metric: took 10.340043369s to restartPrimaryControlPlane
	I0929 10:30:21.924385   26826 kubeadm.go:394] duration metric: took 10.415838616s to StartCluster
	I0929 10:30:21.924400   26826 settings.go:142] acquiring lock: {Name:mk5a393e91300013a868ee870b6bf3cfd60dd530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:30:21.924458   26826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:30:21.925077   26826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/kubeconfig: {Name:mk74c1842d39026f9853151eb440c757ec3be664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:30:21.925284   26826 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:30:21.925511   26826 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:30:21.925548   26826 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 10:30:21.925604   26826 addons.go:69] Setting storage-provisioner=true in profile "functional-599498"
	I0929 10:30:21.925617   26826 addons.go:238] Setting addon storage-provisioner=true in "functional-599498"
	W0929 10:30:21.925623   26826 addons.go:247] addon storage-provisioner should already be in state true
	I0929 10:30:21.925641   26826 host.go:66] Checking if "functional-599498" exists ...
	I0929 10:30:21.926040   26826 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
	I0929 10:30:21.926450   26826 addons.go:69] Setting default-storageclass=true in profile "functional-599498"
	I0929 10:30:21.926462   26826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-599498"
	I0929 10:30:21.926724   26826 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
	I0929 10:30:21.928944   26826 out.go:179] * Verifying Kubernetes components...
	I0929 10:30:21.932205   26826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:30:21.954059   26826 addons.go:238] Setting addon default-storageclass=true in "functional-599498"
	W0929 10:30:21.954069   26826 addons.go:247] addon default-storageclass should already be in state true
	I0929 10:30:21.954092   26826 host.go:66] Checking if "functional-599498" exists ...
	I0929 10:30:21.954510   26826 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
	I0929 10:30:21.964289   26826 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:30:21.967266   26826 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:30:21.967277   26826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:30:21.967344   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:21.994330   26826 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:30:21.994342   26826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:30:21.994401   26826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
	I0929 10:30:22.018839   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:22.038455   26826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
	I0929 10:30:22.141091   26826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:30:22.172869   26826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:30:22.183109   26826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:30:23.096676   26826 node_ready.go:35] waiting up to 6m0s for node "functional-599498" to be "Ready" ...
	I0929 10:30:23.100021   26826 node_ready.go:49] node "functional-599498" is "Ready"
	I0929 10:30:23.100036   26826 node_ready.go:38] duration metric: took 3.344506ms for node "functional-599498" to be "Ready" ...
	I0929 10:30:23.100049   26826 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:30:23.100131   26826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:30:23.100216   26826 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 10:30:23.103175   26826 addons.go:514] duration metric: took 1.177610882s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 10:30:23.113182   26826 api_server.go:72] duration metric: took 1.187866846s to wait for apiserver process to appear ...
	I0929 10:30:23.113196   26826 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:30:23.113214   26826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 10:30:23.124211   26826 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 10:30:23.130867   26826 api_server.go:141] control plane version: v1.34.0
	I0929 10:30:23.130884   26826 api_server.go:131] duration metric: took 17.682292ms to wait for apiserver health ...
	I0929 10:30:23.130897   26826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:30:23.148951   26826 system_pods.go:59] 8 kube-system pods found
	I0929 10:30:23.148972   26826 system_pods.go:61] "coredns-66bc5c9577-ccwkf" [0f8e44fa-650f-41f6-8e26-c024052e1986] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:30:23.148980   26826 system_pods.go:61] "etcd-functional-599498" [4ac94e5f-c7e5-4e50-8aa6-323cc6ec2339] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:30:23.148985   26826 system_pods.go:61] "kindnet-s5dvx" [85b8e319-b1c6-47ad-bbc9-aa68a0a6c791] Running
	I0929 10:30:23.148991   26826 system_pods.go:61] "kube-apiserver-functional-599498" [02330d8e-efea-47b4-884e-000303502db6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 10:30:23.149008   26826 system_pods.go:61] "kube-controller-manager-functional-599498" [d469901a-1ded-4521-bb3d-124d4e72633f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:30:23.149012   26826 system_pods.go:61] "kube-proxy-2s84x" [efba638e-1a4f-400b-b7a1-32fbf390a219] Running
	I0929 10:30:23.149018   26826 system_pods.go:61] "kube-scheduler-functional-599498" [6af43ff9-35ae-423d-a268-298cc8b78d33] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:30:23.149021   26826 system_pods.go:61] "storage-provisioner" [1523dcbc-3074-4a83-b79d-31d0fc96a8b0] Running
	I0929 10:30:23.149026   26826 system_pods.go:74] duration metric: took 18.124275ms to wait for pod list to return data ...
	I0929 10:30:23.149033   26826 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:30:23.156481   26826 default_sa.go:45] found service account: "default"
	I0929 10:30:23.156494   26826 default_sa.go:55] duration metric: took 7.456047ms for default service account to be created ...
	I0929 10:30:23.156502   26826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:30:23.245033   26826 system_pods.go:86] 8 kube-system pods found
	I0929 10:30:23.245053   26826 system_pods.go:89] "coredns-66bc5c9577-ccwkf" [0f8e44fa-650f-41f6-8e26-c024052e1986] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:30:23.245072   26826 system_pods.go:89] "etcd-functional-599498" [4ac94e5f-c7e5-4e50-8aa6-323cc6ec2339] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:30:23.245077   26826 system_pods.go:89] "kindnet-s5dvx" [85b8e319-b1c6-47ad-bbc9-aa68a0a6c791] Running
	I0929 10:30:23.245084   26826 system_pods.go:89] "kube-apiserver-functional-599498" [02330d8e-efea-47b4-884e-000303502db6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 10:30:23.245089   26826 system_pods.go:89] "kube-controller-manager-functional-599498" [d469901a-1ded-4521-bb3d-124d4e72633f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:30:23.245102   26826 system_pods.go:89] "kube-proxy-2s84x" [efba638e-1a4f-400b-b7a1-32fbf390a219] Running
	I0929 10:30:23.245108   26826 system_pods.go:89] "kube-scheduler-functional-599498" [6af43ff9-35ae-423d-a268-298cc8b78d33] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:30:23.245111   26826 system_pods.go:89] "storage-provisioner" [1523dcbc-3074-4a83-b79d-31d0fc96a8b0] Running
	I0929 10:30:23.245117   26826 system_pods.go:126] duration metric: took 88.609819ms to wait for k8s-apps to be running ...
	I0929 10:30:23.245124   26826 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:30:23.245188   26826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:30:23.258947   26826 system_svc.go:56] duration metric: took 13.813798ms WaitForService to wait for kubelet
	I0929 10:30:23.258964   26826 kubeadm.go:578] duration metric: took 1.333660271s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:30:23.258991   26826 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:30:23.266304   26826 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 10:30:23.266318   26826 node_conditions.go:123] node cpu capacity is 2
	I0929 10:30:23.266328   26826 node_conditions.go:105] duration metric: took 7.332689ms to run NodePressure ...
	I0929 10:30:23.266338   26826 start.go:241] waiting for startup goroutines ...
	I0929 10:30:23.266344   26826 start.go:246] waiting for cluster config update ...
	I0929 10:30:23.266354   26826 start.go:255] writing updated cluster config ...
	I0929 10:30:23.266666   26826 ssh_runner.go:195] Run: rm -f paused
	I0929 10:30:23.271497   26826 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:30:23.345385   26826 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ccwkf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 10:30:25.351738   26826 pod_ready.go:104] pod "coredns-66bc5c9577-ccwkf" is not "Ready", error: <nil>
	W0929 10:30:27.851820   26826 pod_ready.go:104] pod "coredns-66bc5c9577-ccwkf" is not "Ready", error: <nil>
	I0929 10:30:29.350713   26826 pod_ready.go:94] pod "coredns-66bc5c9577-ccwkf" is "Ready"
	I0929 10:30:29.350727   26826 pod_ready.go:86] duration metric: took 6.005326927s for pod "coredns-66bc5c9577-ccwkf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:29.353492   26826 pod_ready.go:83] waiting for pod "etcd-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:29.357794   26826 pod_ready.go:94] pod "etcd-functional-599498" is "Ready"
	I0929 10:30:29.357806   26826 pod_ready.go:86] duration metric: took 4.302449ms for pod "etcd-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:29.360387   26826 pod_ready.go:83] waiting for pod "kube-apiserver-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 10:30:31.366581   26826 pod_ready.go:104] pod "kube-apiserver-functional-599498" is not "Ready", error: <nil>
	I0929 10:30:31.866344   26826 pod_ready.go:94] pod "kube-apiserver-functional-599498" is "Ready"
	I0929 10:30:31.866358   26826 pod_ready.go:86] duration metric: took 2.505959311s for pod "kube-apiserver-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:31.868724   26826 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:31.873367   26826 pod_ready.go:94] pod "kube-controller-manager-functional-599498" is "Ready"
	I0929 10:30:31.873381   26826 pod_ready.go:86] duration metric: took 4.645223ms for pod "kube-controller-manager-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:31.875927   26826 pod_ready.go:83] waiting for pod "kube-proxy-2s84x" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:32.149875   26826 pod_ready.go:94] pod "kube-proxy-2s84x" is "Ready"
	I0929 10:30:32.149888   26826 pod_ready.go:86] duration metric: took 273.950609ms for pod "kube-proxy-2s84x" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:32.349107   26826 pod_ready.go:83] waiting for pod "kube-scheduler-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:32.748213   26826 pod_ready.go:94] pod "kube-scheduler-functional-599498" is "Ready"
	I0929 10:30:32.748234   26826 pod_ready.go:86] duration metric: took 399.106522ms for pod "kube-scheduler-functional-599498" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:30:32.748245   26826 pod_ready.go:40] duration metric: took 9.47672633s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:30:32.804174   26826 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 10:30:32.807435   26826 out.go:179] * Done! kubectl is now configured to use "functional-599498" cluster and "default" namespace by default
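Annotation: the healthz wait logged above (the api_server.go lines) reduces to polling one HTTPS endpoint until it answers 200/ok. A minimal Go sketch of that probe, using the host, port, and 6m0s budget from this log; TLS verification is skipped here purely for illustration, since this sketch has no cluster CA (minikube's real checker authenticates via the kubeconfig's client certs):

// healthz_probe.go - illustrative only; endpoint taken from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "Will wait 6m0s" above
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}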
	
	
	==> CRI-O <==
	Sep 29 10:31:09 functional-599498 crio[4130]: time="2025-09-29 10:31:09.322535320Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-l5fh4 Namespace:default ID:6f63549f07871e3da7e7af04eb6d9ce52c9382e4ab16f642f6a46d096e7b1307 UID:f35465c0-4d57-4ea1-bb1f-c6902d4ee885 NetNS:/var/run/netns/3c181254-e2a6-44ef-820f-883599ea1462 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:31:09 functional-599498 crio[4130]: time="2025-09-29 10:31:09.322741296Z" level=info msg="Checking pod default_hello-node-75c85bcc94-l5fh4 for CNI network kindnet (type=ptp)"
	Sep 29 10:31:09 functional-599498 crio[4130]: time="2025-09-29 10:31:09.327216547Z" level=info msg="Ran pod sandbox 6f63549f07871e3da7e7af04eb6d9ce52c9382e4ab16f642f6a46d096e7b1307 with infra container: default/hello-node-75c85bcc94-l5fh4/POD" id=1c40cb2d-b679-4f22-93fe-e111bde1c2ca name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:31:09 functional-599498 crio[4130]: time="2025-09-29 10:31:09.328442241Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=05b70cfc-3904-475e-9a83-5a0f88730d90 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:31:14 functional-599498 crio[4130]: time="2025-09-29 10:31:14.992048404Z" level=info msg="Stopping pod sandbox: 75234024ad77c32638c115d228376a4bcefd76e675634e300bf6608690e18e5f" id=ee17b792-ae16-4b4c-ab46-d2bffc8161b4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:31:14 functional-599498 crio[4130]: time="2025-09-29 10:31:14.992091966Z" level=info msg="Stopped pod sandbox (already stopped): 75234024ad77c32638c115d228376a4bcefd76e675634e300bf6608690e18e5f" id=ee17b792-ae16-4b4c-ab46-d2bffc8161b4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:31:14 functional-599498 crio[4130]: time="2025-09-29 10:31:14.992934583Z" level=info msg="Removing pod sandbox: 75234024ad77c32638c115d228376a4bcefd76e675634e300bf6608690e18e5f" id=e4037930-000f-494f-ae24-d94125dbc6f6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.022365774Z" level=info msg="Removed pod sandbox: 75234024ad77c32638c115d228376a4bcefd76e675634e300bf6608690e18e5f" id=e4037930-000f-494f-ae24-d94125dbc6f6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.027275005Z" level=info msg="Stopping pod sandbox: aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5" id=45b5507a-cf98-4d5d-9ee9-9e86cd1a8300 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.027339277Z" level=info msg="Stopped pod sandbox (already stopped): aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5" id=45b5507a-cf98-4d5d-9ee9-9e86cd1a8300 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.027963143Z" level=info msg="Removing pod sandbox: aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5" id=4d4c9a6c-b55b-4e77-b571-c5a98c74db36 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.047187478Z" level=info msg="Removed pod sandbox: aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5" id=4d4c9a6c-b55b-4e77-b571-c5a98c74db36 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.048146289Z" level=info msg="Stopping pod sandbox: ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24" id=fe3a640f-9c74-44c8-b2f9-1e48118f12d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.048197834Z" level=info msg="Stopped pod sandbox (already stopped): ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24" id=fe3a640f-9c74-44c8-b2f9-1e48118f12d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.048857122Z" level=info msg="Removing pod sandbox: ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24" id=98b0875c-e1be-4ec5-92ad-836e8a678ca7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:31:15 functional-599498 crio[4130]: time="2025-09-29 10:31:15.058286805Z" level=info msg="Removed pod sandbox: ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24" id=98b0875c-e1be-4ec5-92ad-836e8a678ca7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:31:21 functional-599498 crio[4130]: time="2025-09-29 10:31:21.003422290Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=55db0cc3-a3cd-4d13-9de6-f2da0de75168 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:31:34 functional-599498 crio[4130]: time="2025-09-29 10:31:34.002698212Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=24323745-d873-4a4e-9dd9-59c875d6a840 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:31:47 functional-599498 crio[4130]: time="2025-09-29 10:31:47.002758924Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e320aff4-3c1c-4d75-b103-fa07bac5e1d1 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:32:27 functional-599498 crio[4130]: time="2025-09-29 10:32:27.001881582Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6d6004a-574a-48a9-bc5f-ee97525a5e3d name=/runtime.v1.ImageService/PullImage
	Sep 29 10:32:38 functional-599498 crio[4130]: time="2025-09-29 10:32:38.002662161Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c244b46f-ec7d-4d96-b554-db678cfb2bdb name=/runtime.v1.ImageService/PullImage
	Sep 29 10:33:49 functional-599498 crio[4130]: time="2025-09-29 10:33:49.002509478Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d533fc52-04e3-4197-b8ab-ceb8bc07a9cc name=/runtime.v1.ImageService/PullImage
	Sep 29 10:34:11 functional-599498 crio[4130]: time="2025-09-29 10:34:11.004606524Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7e91cd99-62a8-45df-bf4a-d23b0ffda4e2 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:36:33 functional-599498 crio[4130]: time="2025-09-29 10:36:33.002732500Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=95a050e5-8d5d-41ac-8347-d4e49af8d77b name=/runtime.v1.ImageService/PullImage
	Sep 29 10:37:00 functional-599498 crio[4130]: time="2025-09-29 10:37:00.005221051Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ddd37987-7b68-40a1-8abf-48099ceb92f7 name=/runtime.v1.ImageService/PullImage
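Annotation: the repeated "Pulling image: kicbase/echo-server:latest" entries above arrive at roughly geometrically growing intervals, which is the shape of kubelet's image-pull backoff. A sketch of that capped exponential retry pattern (the base, cap, and the failing puller are assumptions for illustration, not kubelet's code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries pull(), doubling the delay each failure up to maxDelay.
func pullWithBackoff(pull func() error, base, maxDelay time.Duration, attempts int) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := pull(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, retrying in %s\n", i+1, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("image pull kept failing")
}

func main() {
	fail := func() error { return errors.New("pull timed out") } // stand-in puller
	_ = pullWithBackoff(fail, 100*time.Millisecond, time.Second, 5)
}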
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cc9da8161d5e       docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e   9 minutes ago       Running             myfrontend                0                   c9b54e38aa7b0       sp-pod
	9a31b7de67dcc       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   10 minutes ago      Running             nginx                     0                   dac6b491205b8       nginx-svc
	a5287bafeb9c6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   7de938d84e14b       kindnet-s5dvx
	986bfc25870c5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   6d1d7742a680e       coredns-66bc5c9577-ccwkf
	47c3b7d20a004       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                  10 minutes ago      Running             kube-proxy                2                   fab5fc42698c6       kube-proxy-2s84x
	5ed30902bc8ff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   e7efb6f3d4f65       storage-provisioner
	72eb1a3057e31       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                  10 minutes ago      Running             kube-apiserver            0                   8223cdf0f21bf       kube-apiserver-functional-599498
	0c90d32d03615       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                  10 minutes ago      Running             kube-scheduler            2                   84927629cbbd7       kube-scheduler-functional-599498
	bf3e22828ed46       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                  10 minutes ago      Running             kube-controller-manager   2                   cb776ebe278bd       kube-controller-manager-functional-599498
	429f749eb8ae4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   85a98d57bbaa4       etcd-functional-599498
	d10742c389a99       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   85a98d57bbaa4       etcd-functional-599498
	86e2a9113dfe3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   7de938d84e14b       kindnet-s5dvx
	ca079d3703e61       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                  11 minutes ago      Exited              kube-controller-manager   1                   cb776ebe278bd       kube-controller-manager-functional-599498
	eeb2a88f9ecc5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   6d1d7742a680e       coredns-66bc5c9577-ccwkf
	0b9a8578f993c       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                  11 minutes ago      Exited              kube-proxy                1                   fab5fc42698c6       kube-proxy-2s84x
	5084e8697ad56       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                  11 minutes ago      Exited              kube-scheduler            1                   84927629cbbd7       kube-scheduler-functional-599498
	838807ead68af       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   e7efb6f3d4f65       storage-provisioner
	
	
	==> coredns [986bfc25870c57c2bf43581d39b64eb0d57ad4601db116a3781d660f691a646a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54860 - 7200 "HINFO IN 781821114919468414.4120799829445108629. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025251662s
	
	
	==> coredns [eeb2a88f9ecc54558ea729a540d216a517e179c70a24cc920d09a8efc384de08] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41982 - 47517 "HINFO IN 7208566881744070767.7644906541149103791. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032440027s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
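Annotation: the two plugins visible above expose HTTP endpoints: ready (which logged "Still waiting on: kubernetes") and health (which entered lameduck mode on SIGTERM). A probe sketch for both; ports 8181 and 8080 are the plugin defaults and the pod IP is hypothetical here:

package main

import (
	"fmt"
	"net/http"
)

const podIP = "10.244.0.5" // hypothetical; take the real CoreDNS pod IP from `kubectl -n kube-system get pod -o wide`

func probe(url string) {
	resp, err := http.Get(url)
	if err != nil {
		fmt.Printf("%s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: %s\n", url, resp.Status)
}

func main() {
	probe("http://" + podIP + ":8181/ready")  // ready plugin: non-200 while "Still waiting on: kubernetes"
	probe("http://" + podIP + ":8080/health") // health plugin: answers until the 5s lameduck window on shutdown
}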
	
	
	==> describe nodes <==
	Name:               functional-599498
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-599498
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=functional-599498
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_28_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:28:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-599498
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:40:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:39:19 +0000   Mon, 29 Sep 2025 10:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:39:19 +0000   Mon, 29 Sep 2025 10:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:39:19 +0000   Mon, 29 Sep 2025 10:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:39:19 +0000   Mon, 29 Sep 2025 10:29:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-599498
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 efca2b56ea5d47bd959efbc15e799bc9
	  System UUID:                d05249c3-f906-45db-adec-29c93beafbff
	  Boot ID:                    94bae1c7-2aab-4023-97c8-d86f41852a19
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l5fh4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-jpkpp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-ccwkf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-599498                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-s5dvx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-599498             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-599498    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2s84x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-599498             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-599498 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-599498 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-599498 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-599498 event: Registered Node functional-599498 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-599498 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-599498 event: Registered Node functional-599498 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-599498 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-599498 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-599498 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-599498 event: Registered Node functional-599498 in Controller
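Annotation: the Conditions table above (MemoryPressure/DiskPressure/PIDPressure/Ready) can be read programmatically with client-go, which is what minikube's own NodePressure check does under the hood. A sketch, assuming the kubeconfig path that appears earlier in this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21657-2306/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-599498", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the same Type/Status/Reason triplets as the table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}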
	
	
	==> dmesg <==
	[Sep29 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015081] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507046] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032504] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738127] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.628888] kauditd_printk_skb: 36 callbacks suppressed
	[Sep29 10:24] hrtimer: interrupt took 16266417 ns
	
	
	==> etcd [429f749eb8ae4b4b32d54a7a24edf861c2fd559f70d53c7b1575d06b29f17c88] <==
	{"level":"warn","ts":"2025-09-29T10:30:18.352020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.368923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.399552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.419319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.432541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.489211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.495510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.517760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.548641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.555757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.570754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.587671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.610166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.619626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.650070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.655965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.683294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.718759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.747917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.762269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.779595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:30:18.832004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:40:17.295885Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2025-09-29T10:40:17.319762Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1092,"took":"23.452239ms","hash":235603243,"current-db-size-bytes":3158016,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1355776,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-29T10:40:17.319821Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":235603243,"revision":1092,"compact-revision":-1}
	
	
	==> etcd [d10742c389a99d3c544fb08a1b7e154933c3c2d79f3a55c8e0d87e8d3e4bd134] <==
	{"level":"warn","ts":"2025-09-29T10:29:36.989170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:29:37.007851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:29:37.027302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:29:37.060006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:29:37.084450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:29:37.100010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:29:37.200098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59024","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:30:03.384091Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:30:03.384154Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-599498","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T10:30:03.384244Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:30:03.523814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:30:03.523940Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:30:03.523999Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T10:30:03.524080Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:30:03.524107Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T10:30:03.524174Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:30:03.524173Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:30:03.524200Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:30:03.524158Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:30:03.524216Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:30:03.524222Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:30:03.528054Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T10:30:03.528171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:30:03.528202Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T10:30:03.528214Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-599498","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:40:55 up 23 min,  0 users,  load average: 0.01, 0.23, 0.49
	Linux functional-599498 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [86e2a9113dfe36181221a569feda477c7ca1a5262a39f8b0e50d96bce1862b14] <==
	I0929 10:29:34.177973       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 10:29:34.179652       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 10:29:34.180998       1 main.go:148] setting mtu 1500 for CNI 
	I0929 10:29:34.181233       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 10:29:34.181312       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T10:29:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 10:29:34.340204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 10:29:34.340435       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 10:29:34.340508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 10:29:34.341745       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 10:29:38.648891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 10:29:38.649054       1 metrics.go:72] Registering metrics
	I0929 10:29:38.649157       1 controller.go:711] "Syncing nftables rules"
	I0929 10:29:44.330501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:29:44.330589       1 main.go:301] handling current node
	I0929 10:29:54.330465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:29:54.330520       1 main.go:301] handling current node
	
	
	==> kindnet [a5287bafeb9c65a969214e157bd57bda6fa509e41ef9f32ba1f0c71f114a14b8] <==
	I0929 10:38:50.809445       1 main.go:301] handling current node
	I0929 10:39:00.804363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:39:00.804504       1 main.go:301] handling current node
	I0929 10:39:10.804581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:39:10.804693       1 main.go:301] handling current node
	I0929 10:39:20.803692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:39:20.803813       1 main.go:301] handling current node
	I0929 10:39:30.804674       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:39:30.804707       1 main.go:301] handling current node
	I0929 10:39:40.804676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:39:40.804708       1 main.go:301] handling current node
	I0929 10:39:50.804426       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:39:50.804459       1 main.go:301] handling current node
	I0929 10:40:00.811040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:40:00.811078       1 main.go:301] handling current node
	I0929 10:40:10.812177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:40:10.812212       1 main.go:301] handling current node
	I0929 10:40:20.811889       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:40:20.812002       1 main.go:301] handling current node
	I0929 10:40:30.804506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:40:30.804540       1 main.go:301] handling current node
	I0929 10:40:40.804678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:40:40.804714       1 main.go:301] handling current node
	I0929 10:40:50.803789       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:40:50.803822       1 main.go:301] handling current node
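Annotation: the steady 10-second cadence of "Handling node with IPs" above is a plain ticker-driven reconcile loop. A sketch of that shape (the handler body is hypothetical; kindnet's real handler programs routes and nftables per node):

package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(10 * time.Second) // the interval visible between log entries
	defer ticker.Stop()
	for i := 0; i < 3; i++ { // bounded here; kindnet loops for the process lifetime
		<-ticker.C
		// kindnet walks every known node; on this single-node cluster that
		// is always the current node, hence the repeating pair of lines.
		fmt.Println("Handling node with IPs: map[192.168.49.2:{}]")
	}
}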
	
	
	==> kube-apiserver [72eb1a3057e311c6c99cb0b2f145b69697cee601b8af83a43b8eb97e0bc26506] <==
	I0929 10:30:23.146082       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:30:23.176369       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 10:30:23.277462       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 10:30:36.729942       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.93.197"}
	I0929 10:30:43.447432       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.53.6"}
	I0929 10:30:53.293575       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.83.4"}
	E0929 10:31:00.517923       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:48556: use of closed network connection
	I0929 10:31:08.765084       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.157.24"}
	I0929 10:31:23.529450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:31:25.390709       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:32:43.654157       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:32:49.484169       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:34:01.481401       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:34:11.337817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:35:08.595299       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:35:25.455330       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:36:31.442332       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:36:31.500113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:34.866938       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:39.086492       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:38:37.765529       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:38:43.586591       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:45.430438       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:52.630868       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:19.540520       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
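Annotation: the "allocated clusterIPs" lines above are the apiserver filling in spec.clusterIP for Services created without one. A client-go sketch of the create that triggers such an allocation; the service name mirrors the log, while the selector and port are assumptions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21657-2306/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "hello-node", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "hello-node"}, // hypothetical selector
			Ports:    []corev1.ServicePort{{Port: 8080}},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The apiserver assigns this field, e.g. "10.110.157.24" in the log above.
	fmt.Println("allocated ClusterIP:", created.Spec.ClusterIP)
}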
	
	
	==> kube-controller-manager [bf3e22828ed46d91f6972d9ea30c1d68ed91ad0816dd427d108087328c86b39a] <==
	I0929 10:30:22.987793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 10:30:22.987960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 10:30:22.988051       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 10:30:22.971456       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:30:22.988121       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:30:22.988232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 10:30:22.988328       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:30:22.988375       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:30:22.988345       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 10:30:23.000717       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 10:30:23.000825       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:30:23.001186       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 10:30:23.006042       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 10:30:23.006139       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 10:30:23.006181       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 10:30:23.006186       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 10:30:23.006192       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 10:30:23.011443       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:30:23.014317       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:30:23.017546       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:30:23.021434       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 10:30:23.103384       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:30:23.124393       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:30:23.124422       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:30:23.124430       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
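Annotation: every "Caches are synced" line above comes from client-go's shared informer machinery; a controller must not start its workers until its informers have done an initial List/Watch. A minimal sketch of starting a factory and blocking on sync (kubeconfig path as in the earlier sketches):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21657-2306/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until the informer's local cache reflects the cluster state.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println(`Caches are synced controller="pods"`)
}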
	
	
	==> kube-controller-manager [ca079d3703e61815b070e3e6d6b58b658edd64f359b88a1dead64739531ba853] <==
	I0929 10:29:41.815216       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:29:41.816024       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:29:41.816365       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:29:41.817557       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:29:41.824119       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 10:29:41.826376       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:29:41.829559       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:29:41.830763       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 10:29:41.831941       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:29:41.834076       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:29:41.836326       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:29:41.849709       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:29:41.853841       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 10:29:41.853906       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 10:29:41.853927       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 10:29:41.853932       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 10:29:41.853938       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 10:29:41.856541       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 10:29:41.859652       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:29:41.863437       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:29:41.864302       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 10:29:41.864978       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:29:41.864979       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:29:41.870325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:29:41.874446       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [0b9a8578f993c661aa89e493d37ab5870635e9842c264ade78908674b096cfc3] <==
	I0929 10:29:38.334252       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:29:38.797132       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:29:39.027504       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:29:39.027932       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:29:39.030194       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:29:39.544946       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:29:39.545084       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:29:39.594076       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:29:39.594448       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:29:39.594518       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:29:39.597472       1 config.go:200] "Starting service config controller"
	I0929 10:29:39.597564       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:29:39.597610       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:29:39.597655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:29:39.597694       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:29:39.597731       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:29:39.598389       1 config.go:309] "Starting node config controller"
	I0929 10:29:39.598452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:29:39.598482       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:29:39.698878       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:29:39.699769       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:29:39.699811       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [47c3b7d20a004617d3a0ed7d445edad847cc33cd1306406fd119589fa305b5c5] <==
	I0929 10:30:20.655246       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:30:20.780973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:30:20.881450       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:30:20.881487       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:30:20.881591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:30:20.920940       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:30:20.921060       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:30:20.925452       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:30:20.925866       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:30:20.926096       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:30:20.927705       1 config.go:200] "Starting service config controller"
	I0929 10:30:20.927766       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:30:20.927811       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:30:20.927839       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:30:20.928066       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:30:20.928108       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:30:20.930298       1 config.go:309] "Starting node config controller"
	I0929 10:30:20.931531       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:30:20.931591       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:30:21.028876       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:30:21.028896       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:30:21.028921       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c90d32d03615369e84ac04c07ecd7f23c31c24e0cd1debda01382b537a2a2b2] <==
	I0929 10:30:17.770838       1 serving.go:386] Generated self-signed cert in-memory
	W0929 10:30:19.463715       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 10:30:19.463820       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 10:30:19.463856       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 10:30:19.463889       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 10:30:19.559360       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:30:19.559474       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:30:19.569699       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:30:19.569797       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:30:19.570694       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:30:19.570772       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:30:19.671273       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5084e8697ad568cd5b744581c7dc5eb861e245c4f3ada3eee7bea2ea5b94d734] <==
	I0929 10:29:37.101048       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:29:39.919588       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:29:39.919619       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:29:39.925610       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:29:39.925715       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:29:39.925788       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:29:39.925818       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:29:39.928411       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:29:39.928453       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:29:39.928475       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:29:39.928481       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:29:40.026635       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:29:40.029117       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:29:40.029146       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:30:03.384919       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:30:03.384943       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:30:03.384961       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:30:03.384989       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:30:03.385009       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:30:03.385027       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0929 10:30:03.385291       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:30:03.385316       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.152880    4466 manager.go:1116] Failed to create existing container: /docker/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/crio-7de938d84e14b7f838d8cc4f408a75a95a465ac436d9f2de009dcea68cfd0ec5: Error finding container 7de938d84e14b7f838d8cc4f408a75a95a465ac436d9f2de009dcea68cfd0ec5: Status 404 returned error can't find the container with id 7de938d84e14b7f838d8cc4f408a75a95a465ac436d9f2de009dcea68cfd0ec5
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.153060    4466 manager.go:1116] Failed to create existing container: /crio-84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a: Error finding container 84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a: Status 404 returned error can't find the container with id 84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.153242    4466 manager.go:1116] Failed to create existing container: /crio-ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24: Error finding container ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24: Status 404 returned error can't find the container with id ef1639d085bc2c478e2bf56dee3dd6874b811a89a57f0cf90edaf08cc776be24
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.153462    4466 manager.go:1116] Failed to create existing container: /docker/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/crio-84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a: Error finding container 84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a: Status 404 returned error can't find the container with id 84927629cbbd739d785bf714f72f4611eaa887629c31c8e49ecd7d307b791d6a
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.153691    4466 manager.go:1116] Failed to create existing container: /docker/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/crio-cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21: Error finding container cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21: Status 404 returned error can't find the container with id cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.154031    4466 manager.go:1116] Failed to create existing container: /docker/e69f16173a200ca050d527d46e68704db9071ae8aeb9fc374319413b455bc2a2/crio-fab5fc42698c6145ccaf1bc6ffd301454e89fdc2ddb6ba9aaaef5fb1a7477675: Error finding container fab5fc42698c6145ccaf1bc6ffd301454e89fdc2ddb6ba9aaaef5fb1a7477675: Status 404 returned error can't find the container with id fab5fc42698c6145ccaf1bc6ffd301454e89fdc2ddb6ba9aaaef5fb1a7477675
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.154304    4466 manager.go:1116] Failed to create existing container: /crio-aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5: Error finding container aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5: Status 404 returned error can't find the container with id aece92f6e95d0d58eaddc5ceaeb296c5a56bbc3ca57362513b7a867d09ecd2a5
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.154504    4466 manager.go:1116] Failed to create existing container: /crio-cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21: Error finding container cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21: Status 404 returned error can't find the container with id cb776ebe278bd1ab9207f7183f81cc8770081965b9fcb34cef6b5374bca8ce21
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.290293    4466 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142415289972757 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:15 functional-599498 kubelet[4466]: E0929 10:40:15.290331    4466 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142415289972757 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:17 functional-599498 kubelet[4466]: E0929 10:40:17.003345    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-l5fh4" podUID="f35465c0-4d57-4ea1-bb1f-c6902d4ee885"
	Sep 29 10:40:23 functional-599498 kubelet[4466]: E0929 10:40:23.002903    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jpkpp" podUID="421cb5b8-ffc4-4787-989a-f04a874d3cc9"
	Sep 29 10:40:25 functional-599498 kubelet[4466]: E0929 10:40:25.292628    4466 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142425292382705 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:25 functional-599498 kubelet[4466]: E0929 10:40:25.292664    4466 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142425292382705 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:28 functional-599498 kubelet[4466]: E0929 10:40:28.002226    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-l5fh4" podUID="f35465c0-4d57-4ea1-bb1f-c6902d4ee885"
	Sep 29 10:40:35 functional-599498 kubelet[4466]: E0929 10:40:35.001826    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jpkpp" podUID="421cb5b8-ffc4-4787-989a-f04a874d3cc9"
	Sep 29 10:40:35 functional-599498 kubelet[4466]: E0929 10:40:35.294799    4466 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142435294538637 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:35 functional-599498 kubelet[4466]: E0929 10:40:35.294838    4466 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142435294538637 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:41 functional-599498 kubelet[4466]: E0929 10:40:41.001357    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-l5fh4" podUID="f35465c0-4d57-4ea1-bb1f-c6902d4ee885"
	Sep 29 10:40:45 functional-599498 kubelet[4466]: E0929 10:40:45.297783    4466 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142445297349715 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:45 functional-599498 kubelet[4466]: E0929 10:40:45.297830    4466 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142445297349715 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:48 functional-599498 kubelet[4466]: E0929 10:40:48.006003    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jpkpp" podUID="421cb5b8-ffc4-4787-989a-f04a874d3cc9"
	Sep 29 10:40:52 functional-599498 kubelet[4466]: E0929 10:40:52.001623    4466 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-l5fh4" podUID="f35465c0-4d57-4ea1-bb1f-c6902d4ee885"
	Sep 29 10:40:55 functional-599498 kubelet[4466]: E0929 10:40:55.303098    4466 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142455301929498 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	Sep 29 10:40:55 functional-599498 kubelet[4466]: E0929 10:40:55.303254    4466 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142455301929498 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:218754} inodes_used:{value:89}}"
	
	
	==> storage-provisioner [5ed30902bc8ffbad2e63383f7cce96c6aa30d9013737e693289d1354b7b9628c] <==
	W0929 10:40:30.837965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:32.841348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:32.846893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:34.850148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:34.857017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:36.860006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:36.864531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:38.867214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:38.871538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:40.874115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:40.878393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:42.881604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:42.888093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:44.891431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:44.895672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:46.898588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:46.902690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:48.905655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:48.912349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:50.916779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:50.921209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:52.924292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:52.928730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:54.931901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:40:54.943714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [838807ead68afd255e408c2a35f0ba8197a5f05dbbdd74cae27040599dddd968] <==
	I0929 10:29:34.902467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 10:29:38.637322       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 10:29:38.639178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 10:29:38.655379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:42.110303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:46.370916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:49.969636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:53.023154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:56.045020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:56.050108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:29:56.050268       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 10:29:56.050453       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-599498_6b758e4c-d70c-4ba8-a762-cebe757dd676!
	W0929 10:29:56.054059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:29:56.055173       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"84f19034-4aa7-449e-adcc-23a98bf5fb0a", APIVersion:"v1", ResourceVersion:"531", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-599498_6b758e4c-d70c-4ba8-a762-cebe757dd676 became leader
	W0929 10:29:56.061272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:29:56.151438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-599498_6b758e4c-d70c-4ba8-a762-cebe757dd676!
	W0929 10:29:58.064747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:58.070105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:30:00.097615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:30:00.144920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:30:02.149161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:30:02.156471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
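
Two error families recur throughout the kubelet section of the log dump above and appear to be environmental noise rather than the failure itself: cAdvisor's "Failed to create existing container" 404s for stale crio-* cgroup scopes, and the eviction manager's "missing image stats" complaints about the CRI-O image filesystem. A quick way to see what the runtime actually reports for image-filesystem stats is to ask CRI-O directly with crictl inside the node; a minimal sketch, reusing the profile name from this run and assuming crictl is on the node's PATH:

	out/minikube-linux-arm64 -p functional-599498 ssh "sudo crictl imagefsinfo"
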
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-599498 -n functional-599498
helpers_test.go:269: (dbg) Run:  kubectl --context functional-599498 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-l5fh4 hello-node-connect-7d85dfc575-jpkpp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-599498 describe pod hello-node-75c85bcc94-l5fh4 hello-node-connect-7d85dfc575-jpkpp
helpers_test.go:290: (dbg) kubectl --context functional-599498 describe pod hello-node-75c85bcc94-l5fh4 hello-node-connect-7d85dfc575-jpkpp:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-l5fh4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-599498/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:31:08 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5wwvn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5wwvn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l5fh4 to functional-599498
	  Normal   Pulling    6m46s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m46s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m46s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m46s (x20 over 9m47s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m32s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-jpkpp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-599498/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:30:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7bpkt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7bpkt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jpkpp to functional-599498
	  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (604.10s)
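
The describe output above pins the root cause on CRI-O short-name resolution: the bare image reference "kicbase/echo-server" cannot be expanded because no unqualified-search registries are defined in the node's /etc/containers/registries.conf. Two possible fixes, sketched under the assumption that the image is published on Docker Hub (the registry host and the 1.0 tag are illustrative, not taken from this run):

	# point the existing deployment at a fully qualified reference
	kubectl --context functional-599498 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:1.0

	# or allow short-name expansion on the node: append to
	# /etc/containers/registries.conf (via minikube ssh), then restart CRI-O
	unqualified-search-registries = ["docker.io"]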

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (601.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-599498 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-599498 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-l5fh4" [f35465c0-4d57-4ea1-bb1f-c6902d4ee885] Pending
helpers_test.go:352: "hello-node-75c85bcc94-l5fh4" [f35465c0-4d57-4ea1-bb1f-c6902d4ee885] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0929 10:33:10.545321    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:33:38.253701    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:38:10.545126    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-599498 -n functional-599498
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 10:41:09.192932778 +0000 UTC m=+1273.534441102
functional_test.go:1460: (dbg) Run:  kubectl --context functional-599498 describe po hello-node-75c85bcc94-l5fh4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-599498 describe po hello-node-75c85bcc94-l5fh4 -n default:
Name:             hello-node-75c85bcc94-l5fh4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-599498/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:31:08 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5wwvn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5wwvn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l5fh4 to functional-599498
Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-599498 logs hello-node-75c85bcc94-l5fh4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-599498 logs hello-node-75c85bcc94-l5fh4 -n default: exit status 1 (277.062715ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-l5fh4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-599498 logs hello-node-75c85bcc94-l5fh4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.16s)
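
Because the container never starts, "kubectl logs" returns BadRequest (as shown above) and the pull failure is only visible through pod events. A small diagnostic sketch using standard kubectl field selectors (the pod name and context are from this run):

	kubectl --context functional-599498 get events -n default \
	  --field-selector involvedObject.name=hello-node-75c85bcc94-l5fh4 \
	  --sort-by=.lastTimestamp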

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 service --namespace=default --https --url hello-node: exit status 115 (528.435831ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31694
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-599498 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 service hello-node --url --format={{.IP}}: exit status 115 (516.59717ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-599498 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 service hello-node --url: exit status 115 (550.728407ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31694
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-599498 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31694
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
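
HTTPS, Format, and URL all fail the same way: the NodePort URL is computed and printed on stdout, but minikube exits with SVC_UNREACHABLE because the hello-node service has no running backend pod, which traces back to the ImagePullBackOff above. One way to confirm that the service has no ready endpoints, using only standard kubectl (context and service name from this run):

	kubectl --context functional-599498 get endpointslices -n default \
	  -l kubernetes.io/service-name=hello-node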

                                                
                                    
TestNetworkPlugins/group/calico/Start (920.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0929 11:25:40.056949    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:25:42.963451    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m20.479617306s)

                                                
                                                
-- stdout --
	* [calico-163439] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-163439" primary control-plane node in "calico-163439" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:25:26.113764  230827 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:25:26.119959  230827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:25:26.119981  230827 out.go:374] Setting ErrFile to fd 2...
	I0929 11:25:26.119987  230827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:25:26.120318  230827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 11:25:26.120790  230827 out.go:368] Setting JSON to false
	I0929 11:25:26.121726  230827 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4076,"bootTime":1759141051,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 11:25:26.121800  230827 start.go:140] virtualization:  
	I0929 11:25:26.125509  230827 out.go:179] * [calico-163439] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:25:26.129742  230827 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:25:26.129787  230827 notify.go:220] Checking for updates...
	I0929 11:25:26.136274  230827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:25:26.139318  230827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 11:25:26.142740  230827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 11:25:26.145618  230827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:25:26.148636  230827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:25:26.152083  230827 config.go:182] Loaded profile config "kindnet-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:25:26.152203  230827 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:25:26.204602  230827 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:25:26.204737  230827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:25:26.267799  230827 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:25:26.256691577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:25:26.267914  230827 docker.go:318] overlay module found
	I0929 11:25:26.272095  230827 out.go:179] * Using the docker driver based on user configuration
	I0929 11:25:26.274893  230827 start.go:304] selected driver: docker
	I0929 11:25:26.274913  230827 start.go:924] validating driver "docker" against <nil>
	I0929 11:25:26.274935  230827 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:25:26.275662  230827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:25:26.363541  230827 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:25:26.353722678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:25:26.363697  230827 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:25:26.363927  230827 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:25:26.367061  230827 out.go:179] * Using Docker driver with root privileges
	I0929 11:25:26.370055  230827 cni.go:84] Creating CNI manager for "calico"
	I0929 11:25:26.370086  230827 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0929 11:25:26.370178  230827 start.go:348] cluster config:
	{Name:calico-163439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-163439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
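
The blob above is minikube's in-memory ClusterConfig; the profile.go/lock.go lines that follow show it being persisted to profiles/calico-163439/config.json. As a rough, hedged sketch (these are stand-in types chosen for illustration, not minikube's real ones), that serialization step looks like:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down stand-in for minikube's cluster config; the real struct
// carries every field shown in the log line above.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	CNI               string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MB
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:   "calico-163439",
		Driver: "docker",
		Memory: 3072,
		CPUs:   2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.0",
			ClusterName:       "calico-163439",
			ContainerRuntime:  "crio",
			CNI:               "calico",
		},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // what would land in profiles/calico-163439/config.json
}
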
	I0929 11:25:26.375117  230827 out.go:179] * Starting "calico-163439" primary control-plane node in "calico-163439" cluster
	I0929 11:25:26.378053  230827 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:25:26.381108  230827 out.go:179] * Pulling base image v0.0.48 ...
	I0929 11:25:26.384025  230827 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:25:26.384084  230827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:25:26.384094  230827 cache.go:58] Caching tarball of preloaded images
	I0929 11:25:26.384129  230827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:25:26.384178  230827 preload.go:172] Found /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 11:25:26.384192  230827 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:25:26.384303  230827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/config.json ...
	I0929 11:25:26.384322  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/config.json: {Name:mkd33a2f822a7c3799d5e01143a206a5b91eff74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
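
The WriteFile line above names a lock with Delay:500ms and Timeout:1m0s. A generic acquire-with-retry sketch of that pattern, assuming a plain O_EXCL lock file rather than whatever primitive minikube's lock.go actually uses:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until timeout,
// mirroring the Delay:500ms / Timeout:1m0s parameters in the log line above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	// safe to write config.json here
}
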
	I0929 11:25:26.406988  230827 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 11:25:26.407015  230827 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 11:25:26.407035  230827 cache.go:232] Successfully downloaded all kic artifacts
	I0929 11:25:26.407066  230827 start.go:360] acquireMachinesLock for calico-163439: {Name:mk279b874d683111286fe737c91c3f1fb4b7f25a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:25:26.407221  230827 start.go:364] duration metric: took 136.435µs to acquireMachinesLock for "calico-163439"
	I0929 11:25:26.407264  230827 start.go:93] Provisioning new machine with config: &{Name:calico-163439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-163439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:25:26.407337  230827 start.go:125] createHost starting for "" (driver="docker")
	I0929 11:25:26.412625  230827 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 11:25:26.412870  230827 start.go:159] libmachine.API.Create for "calico-163439" (driver="docker")
	I0929 11:25:26.412906  230827 client.go:168] LocalClient.Create starting
	I0929 11:25:26.412985  230827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem
	I0929 11:25:26.413024  230827 main.go:141] libmachine: Decoding PEM data...
	I0929 11:25:26.413045  230827 main.go:141] libmachine: Parsing certificate...
	I0929 11:25:26.413105  230827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem
	I0929 11:25:26.413133  230827 main.go:141] libmachine: Decoding PEM data...
	I0929 11:25:26.413151  230827 main.go:141] libmachine: Parsing certificate...
	I0929 11:25:26.413515  230827 cli_runner.go:164] Run: docker network inspect calico-163439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 11:25:26.429674  230827 cli_runner.go:211] docker network inspect calico-163439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 11:25:26.429757  230827 network_create.go:284] running [docker network inspect calico-163439] to gather additional debugging logs...
	I0929 11:25:26.429773  230827 cli_runner.go:164] Run: docker network inspect calico-163439
	W0929 11:25:26.450092  230827 cli_runner.go:211] docker network inspect calico-163439 returned with exit code 1
	I0929 11:25:26.450125  230827 network_create.go:287] error running [docker network inspect calico-163439]: docker network inspect calico-163439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-163439 not found
	I0929 11:25:26.450141  230827 network_create.go:289] output of [docker network inspect calico-163439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-163439 not found
	
	** /stderr **
	I0929 11:25:26.450241  230827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:25:26.479091  230827 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67aad7d52d6a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:8f:bc:2b:fd:58} reservation:<nil>}
	I0929 11:25:26.479654  230827 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-07edaadbb8c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:78:39:7c:7a:e5} reservation:<nil>}
	I0929 11:25:26.481215  230827 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e1731df24827 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:2c:4e:23:8e:44} reservation:<nil>}
	I0929 11:25:26.481696  230827 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400185bbd0}
	I0929 11:25:26.481717  230827 network_create.go:124] attempt to create docker network calico-163439 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0929 11:25:26.481778  230827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-163439 calico-163439
	I0929 11:25:26.560365  230827 network_create.go:108] docker network calico-163439 192.168.76.0/24 created
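
network.go above walks candidate private /24s, skipping 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges own them, and lands on 192.168.76.0/24. Judging purely from this log the third octet advances by 9 per probe; a small sketch of that scan under that assumption:

package main

import "fmt"

// firstFreeSubnet mimics the scan in the log: start at 192.168.49.0/24 and
// step the third octet by 9 until a subnet is not in the taken set.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet < 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{ // the three bridges the log reports as taken
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, as chosen above
}
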
	I0929 11:25:26.560400  230827 kic.go:121] calculated static IP "192.168.76.2" for the "calico-163439" container
	I0929 11:25:26.560487  230827 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 11:25:26.587347  230827 cli_runner.go:164] Run: docker volume create calico-163439 --label name.minikube.sigs.k8s.io=calico-163439 --label created_by.minikube.sigs.k8s.io=true
	I0929 11:25:26.608377  230827 oci.go:103] Successfully created a docker volume calico-163439
	I0929 11:25:26.608467  230827 cli_runner.go:164] Run: docker run --rm --name calico-163439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-163439 --entrypoint /usr/bin/test -v calico-163439:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 11:25:27.279341  230827 oci.go:107] Successfully prepared a docker volume calico-163439
	I0929 11:25:27.279383  230827 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:25:27.279415  230827 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 11:25:27.279483  230827 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v calico-163439:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 11:25:31.959215  230827 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v calico-163439:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.679695926s)
	I0929 11:25:31.959242  230827 kic.go:203] duration metric: took 4.679824043s to extract preloaded images to volume ...
	W0929 11:25:31.959369  230827 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 11:25:31.959482  230827 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 11:25:32.065642  230827 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-163439 --name calico-163439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-163439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-163439 --network calico-163439 --ip 192.168.76.2 --volume calico-163439:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
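
cli_runner.go builds that long docker run invocation programmatically. A sketch of the equivalent exec call from Go, with the flag list abridged to the ones visible above (the kicbase digest is dropped for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--hostname", "calico-163439",
		"--name", "calico-163439",
		"--network", "calico-163439",
		"--ip", "192.168.76.2",
		"--volume", "calico-163439:/var",
		"--memory=3072mb", "--cpus=2",
		"--publish=127.0.0.1::22", // random host port; resolved later via docker container inspect
		"gcr.io/k8s-minikube/kicbase:v0.0.48",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
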
	I0929 11:25:32.488687  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Running}}
	I0929 11:25:32.512668  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Status}}
	I0929 11:25:32.539819  230827 cli_runner.go:164] Run: docker exec calico-163439 stat /var/lib/dpkg/alternatives/iptables
	I0929 11:25:32.615194  230827 oci.go:144] the created container "calico-163439" has a running status.
	I0929 11:25:32.615225  230827 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa...
	I0929 11:25:32.765853  230827 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 11:25:32.796038  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Status}}
	I0929 11:25:32.825897  230827 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 11:25:32.825942  230827 kic_runner.go:114] Args: [docker exec --privileged calico-163439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 11:25:32.887417  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Status}}
	I0929 11:25:32.912363  230827 machine.go:93] provisionDockerMachine start ...
	I0929 11:25:32.912446  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:32.937213  230827 main.go:141] libmachine: Using SSH client type: native
	I0929 11:25:32.937555  230827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0929 11:25:32.937564  230827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:25:32.938477  230827 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0929 11:25:36.090702  230827 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-163439
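
The dial at 11:25:32.938477 fails with ssh: handshake failed: EOF because sshd inside the freshly started container isn't listening yet; provisioning simply retries until the handshake succeeds about three seconds later. A hedged sketch of such a retry loop with golang.org/x/crypto/ssh, using the host port (33108), user and key path seen in this log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until the SSH handshake succeeds or the
// timeout elapses, swallowing transient errors like "handshake failed: EOF".
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		if time.Now().After(deadline) {
			return nil, err
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Error handling elided for brevity; the key path is the one from the log.
	key, _ := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/calico-163439/id_rsa")
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := dialWithRetry("127.0.0.1:33108", cfg, time.Minute)
	fmt.Println(client != nil, err)
}
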
	
	I0929 11:25:36.090727  230827 ubuntu.go:182] provisioning hostname "calico-163439"
	I0929 11:25:36.090881  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:36.111205  230827 main.go:141] libmachine: Using SSH client type: native
	I0929 11:25:36.111529  230827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0929 11:25:36.111546  230827 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-163439 && echo "calico-163439" | sudo tee /etc/hostname
	I0929 11:25:36.270285  230827 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-163439
	
	I0929 11:25:36.270360  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:36.289270  230827 main.go:141] libmachine: Using SSH client type: native
	I0929 11:25:36.289577  230827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0929 11:25:36.289593  230827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-163439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-163439/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-163439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:25:36.431121  230827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:25:36.431211  230827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21657-2306/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-2306/.minikube}
	I0929 11:25:36.431240  230827 ubuntu.go:190] setting up certificates
	I0929 11:25:36.431260  230827 provision.go:84] configureAuth start
	I0929 11:25:36.431327  230827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-163439
	I0929 11:25:36.449056  230827 provision.go:143] copyHostCerts
	I0929 11:25:36.449147  230827 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem, removing ...
	I0929 11:25:36.449197  230827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem
	I0929 11:25:36.449289  230827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/ca.pem (1082 bytes)
	I0929 11:25:36.449396  230827 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem, removing ...
	I0929 11:25:36.449402  230827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem
	I0929 11:25:36.449459  230827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/cert.pem (1123 bytes)
	I0929 11:25:36.449581  230827 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem, removing ...
	I0929 11:25:36.449596  230827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem
	I0929 11:25:36.449630  230827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-2306/.minikube/key.pem (1679 bytes)
	I0929 11:25:36.449687  230827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem org=jenkins.calico-163439 san=[127.0.0.1 192.168.76.2 calico-163439 localhost minikube]
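
provision.go:117 mints a server certificate signed by the minikube CA with exactly the SANs listed ([127.0.0.1 192.168.76.2 calico-163439 localhost minikube]). A self-contained crypto/x509 sketch of that step; it generates a throwaway CA in place of loading ca.pem/ca-key.pem, so it is illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in for minikube's CA: a freshly generated self-signed CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-163439"}},
		DNSNames:     []string{"calico-163439", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}
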
	I0929 11:25:36.648361  230827 provision.go:177] copyRemoteCerts
	I0929 11:25:36.648433  230827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:25:36.648472  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:36.670562  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:25:36.772462  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:25:36.798502  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:25:36.825243  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:25:36.851087  230827 provision.go:87] duration metric: took 419.805066ms to configureAuth
	I0929 11:25:36.851267  230827 ubuntu.go:206] setting minikube options for container-runtime
	I0929 11:25:36.851499  230827 config.go:182] Loaded profile config "calico-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:25:36.851622  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:36.870320  230827 main.go:141] libmachine: Using SSH client type: native
	I0929 11:25:36.870629  230827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0929 11:25:36.870648  230827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:25:37.128294  230827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:25:37.128318  230827 machine.go:96] duration metric: took 4.215936003s to provisionDockerMachine
	I0929 11:25:37.128328  230827 client.go:171] duration metric: took 10.715415577s to LocalClient.Create
	I0929 11:25:37.128354  230827 start.go:167] duration metric: took 10.715490194s to libmachine.API.Create "calico-163439"
	I0929 11:25:37.128365  230827 start.go:293] postStartSetup for "calico-163439" (driver="docker")
	I0929 11:25:37.128375  230827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:25:37.128443  230827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:25:37.128488  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:37.146711  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:25:37.256654  230827 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:25:37.260026  230827 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 11:25:37.260109  230827 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 11:25:37.260135  230827 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 11:25:37.260163  230827 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 11:25:37.260190  230827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-2306/.minikube/addons for local assets ...
	I0929 11:25:37.260253  230827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-2306/.minikube/files for local assets ...
	I0929 11:25:37.260346  230827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem -> 41082.pem in /etc/ssl/certs
	I0929 11:25:37.260464  230827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:25:37.270176  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem --> /etc/ssl/certs/41082.pem (1708 bytes)
	I0929 11:25:37.295010  230827 start.go:296] duration metric: took 166.630524ms for postStartSetup
	I0929 11:25:37.295492  230827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-163439
	I0929 11:25:37.312769  230827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/config.json ...
	I0929 11:25:37.313055  230827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:25:37.313105  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:37.329878  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:25:37.424073  230827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 11:25:37.428647  230827 start.go:128] duration metric: took 11.021293816s to createHost
	I0929 11:25:37.428675  230827 start.go:83] releasing machines lock for "calico-163439", held for 11.02143694s
	I0929 11:25:37.428747  230827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-163439
	I0929 11:25:37.445873  230827 ssh_runner.go:195] Run: cat /version.json
	I0929 11:25:37.445934  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:37.446196  230827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:25:37.446250  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:25:37.468611  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:25:37.480583  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:25:37.562658  230827 ssh_runner.go:195] Run: systemctl --version
	I0929 11:25:37.694282  230827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:25:37.840896  230827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 11:25:37.845362  230827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:25:37.867769  230827 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 11:25:37.867859  230827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:25:37.904321  230827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
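
cni.go sidelines pre-existing loopback, bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so CRI-O only loads the CNI minikube installs (Calico here). The find/mv pipeline above, expressed in Go with the same glob patterns:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disable renames every file matching the patterns so the runtime ignores it,
// mirroring the `find ... -exec mv {} {}.mk_disabled` commands in the log.
func disable(dir string, patterns ...string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already sidelined
			}
			fmt.Println("disabling", m)
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	_ = disable("/etc/cni/net.d", "*loopback.conf*", "*bridge*", "*podman*")
}
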
	I0929 11:25:37.904343  230827 start.go:495] detecting cgroup driver to use...
	I0929 11:25:37.904375  230827 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 11:25:37.904424  230827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:25:37.921649  230827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:25:37.933889  230827 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:25:37.933984  230827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:25:37.947952  230827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:25:37.962275  230827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:25:38.064647  230827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:25:38.174770  230827 docker.go:234] disabling docker service ...
	I0929 11:25:38.174900  230827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:25:38.204785  230827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:25:38.217392  230827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:25:38.308294  230827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:25:38.416280  230827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:25:38.428307  230827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:25:38.445639  230827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:25:38.445707  230827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.455797  230827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:25:38.455871  230827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.466723  230827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.476865  230827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.486950  230827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:25:38.497335  230827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.507954  230827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.525517  230827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:25:38.535466  230827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:25:38.544258  230827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:25:38.553138  230827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:25:38.641379  230827 ssh_runner.go:195] Run: sudo systemctl restart crio
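
crio.go applies those settings by rewriting /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", and an unprivileged-port sysctl, then restarts CRI-O. The same line-oriented rewrite sketched in Go (the regexes mirror the sed expressions above; the input is a toy two-line config):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))

	// Equivalent of: sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("$0\nconmon_cgroup = \"pod\""))

	fmt.Print(string(conf))
	// In minikube the result is written back and `systemctl restart crio` follows.
}
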
	I0929 11:25:38.754322  230827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:25:38.754400  230827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:25:38.757965  230827 start.go:563] Will wait 60s for crictl version
	I0929 11:25:38.758042  230827 ssh_runner.go:195] Run: which crictl
	I0929 11:25:38.762116  230827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:25:38.807046  230827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 11:25:38.807220  230827 ssh_runner.go:195] Run: crio --version
	I0929 11:25:38.850429  230827 ssh_runner.go:195] Run: crio --version
	I0929 11:25:38.896640  230827 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 11:25:38.899526  230827 cli_runner.go:164] Run: docker network inspect calico-163439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:25:38.918312  230827 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 11:25:38.922326  230827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:25:38.933871  230827 kubeadm.go:875] updating cluster {Name:calico-163439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-163439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:25:38.933984  230827 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:25:38.934042  230827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:25:39.019955  230827 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:25:39.019985  230827 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:25:39.020048  230827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:25:39.069109  230827 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:25:39.069131  230827 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:25:39.069140  230827 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0929 11:25:39.069229  230827 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-163439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-163439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0929 11:25:39.069309  230827 ssh_runner.go:195] Run: crio config
	I0929 11:25:39.120886  230827 cni.go:84] Creating CNI manager for "calico"
	I0929 11:25:39.120911  230827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:25:39.120938  230827 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-163439 NodeName:calico-163439 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:25:39.121078  230827 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-163439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
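
minikube renders the manifest above from the kubeadm options printed at kubeadm.go:189. A toy text/template rendering of just the InitConfiguration head (the field names here are invented for the sketch, not minikube's actual template variables):

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.76.2",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/crio/crio.sock",
		"NodeName":         "calico-163439",
	})
}
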
	
	I0929 11:25:39.121148  230827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:25:39.130123  230827 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:25:39.130200  230827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:25:39.138879  230827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 11:25:39.165107  230827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:25:39.184995  230827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0929 11:25:39.203679  230827 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 11:25:39.208648  230827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:25:39.219707  230827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:25:39.299162  230827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:25:39.313450  230827 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439 for IP: 192.168.76.2
	I0929 11:25:39.313472  230827 certs.go:194] generating shared ca certs ...
	I0929 11:25:39.313487  230827 certs.go:226] acquiring lock for ca certs: {Name:mkddeaa430ffcc39cce53e20ea2b5588c6828a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:39.313625  230827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key
	I0929 11:25:39.313678  230827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key
	I0929 11:25:39.313690  230827 certs.go:256] generating profile certs ...
	I0929 11:25:39.313746  230827 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/client.key
	I0929 11:25:39.313761  230827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/client.crt with IP's: []
	I0929 11:25:39.662204  230827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/client.crt ...
	I0929 11:25:39.662236  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/client.crt: {Name:mk986b150aff5a2181fd17c2d26e999b5cb2d216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:39.663060  230827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/client.key ...
	I0929 11:25:39.663076  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/client.key: {Name:mkd0fe775f8043a5c36ad8051ca31dcf27564789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:39.663201  230827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.key.c1e11e3a
	I0929 11:25:39.663220  230827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.crt.c1e11e3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0929 11:25:39.935138  230827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.crt.c1e11e3a ...
	I0929 11:25:39.935166  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.crt.c1e11e3a: {Name:mk1cf871ba589ed1a170a9171de9af811993ac03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:39.935349  230827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.key.c1e11e3a ...
	I0929 11:25:39.935365  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.key.c1e11e3a: {Name:mk1b86eec31c5ae59097cd54958be3abf208f382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:39.935453  230827 certs.go:381] copying /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.crt.c1e11e3a -> /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.crt
	I0929 11:25:39.935548  230827 certs.go:385] copying /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.key.c1e11e3a -> /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.key
	I0929 11:25:39.935606  230827 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.key
	I0929 11:25:39.935625  230827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.crt with IP's: []
	I0929 11:25:40.940216  230827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.crt ...
	I0929 11:25:40.940256  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.crt: {Name:mkcd133f5495b8fed25c82f21011d4fa867079b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:40.940458  230827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.key ...
	I0929 11:25:40.940470  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.key: {Name:mkd7e1bcd8923abae611fac6429c1659fb4a3be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:25:40.941318  230827 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/4108.pem (1338 bytes)
	W0929 11:25:40.941368  230827 certs.go:480] ignoring /home/jenkins/minikube-integration/21657-2306/.minikube/certs/4108_empty.pem, impossibly tiny 0 bytes
	I0929 11:25:40.941385  230827 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:25:40.941411  230827 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:25:40.941441  230827 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:25:40.941468  230827 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/certs/key.pem (1679 bytes)
	I0929 11:25:40.941515  230827 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem (1708 bytes)
	I0929 11:25:40.942092  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:25:40.974863  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 11:25:41.018524  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:25:41.050555  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 11:25:41.077496  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:25:41.104333  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 11:25:41.130592  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:25:41.159708  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/calico-163439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:25:41.187087  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/certs/4108.pem --> /usr/share/ca-certificates/4108.pem (1338 bytes)
	I0929 11:25:41.213951  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/ssl/certs/41082.pem --> /usr/share/ca-certificates/41082.pem (1708 bytes)
	I0929 11:25:41.239721  230827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:25:41.267617  230827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:25:41.286299  230827 ssh_runner.go:195] Run: openssl version
	I0929 11:25:41.292032  230827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41082.pem && ln -fs /usr/share/ca-certificates/41082.pem /etc/ssl/certs/41082.pem"
	I0929 11:25:41.302124  230827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41082.pem
	I0929 11:25:41.306069  230827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 10:28 /usr/share/ca-certificates/41082.pem
	I0929 11:25:41.306196  230827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41082.pem
	I0929 11:25:41.313770  230827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41082.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:25:41.323800  230827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:25:41.333475  230827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:25:41.337110  230827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:25:41.337217  230827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:25:41.344396  230827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:25:41.354039  230827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4108.pem && ln -fs /usr/share/ca-certificates/4108.pem /etc/ssl/certs/4108.pem"
	I0929 11:25:41.364698  230827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4108.pem
	I0929 11:25:41.368568  230827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 10:28 /usr/share/ca-certificates/4108.pem
	I0929 11:25:41.368631  230827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4108.pem
	I0929 11:25:41.376170  230827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4108.pem /etc/ssl/certs/51391683.0"
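
The <hash>.0 names above (51391683.0, b5213941.0, 3ec20f2e.0) are OpenSSL subject-hash lookup links: openssl x509 -hash -noout prints a hash of the certificate's subject, and a symlink named <hash>.0 in /etc/ssl/certs lets OpenSSL locate the CA by subject at verification time. The hash-then-link dance from the log, in Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces: openssl x509 -hash -noout -in <pem>
// followed by: ln -fs <pem> /etc/ssl/certs/<hash>.0
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}
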
	I0929 11:25:41.386234  230827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:25:41.389778  230827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:25:41.389832  230827 kubeadm.go:392] StartCluster: {Name:calico-163439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-163439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:25:41.389923  230827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:25:41.389988  230827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:25:41.440637  230827 cri.go:89] found id: ""
	I0929 11:25:41.440774  230827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:25:41.449975  230827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:25:41.460362  230827 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 11:25:41.460434  230827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:25:41.470603  230827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:25:41.470621  230827 kubeadm.go:157] found existing configuration files:
	
	I0929 11:25:41.470675  230827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:25:41.480720  230827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:25:41.480808  230827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:25:41.490095  230827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:25:41.499604  230827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:25:41.499669  230827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:25:41.508668  230827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:25:41.518683  230827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:25:41.518750  230827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:25:41.527855  230827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:25:41.537684  230827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:25:41.537773  230827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
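
	The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. A condensed, equivalent sketch of that cleanup:

		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
		    || sudo rm -f "/etc/kubernetes/${f}.conf"
		done
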
	I0929 11:25:41.547404  230827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 11:25:41.593541  230827 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:25:41.593777  230827 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:25:41.612084  230827 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 11:25:41.612175  230827 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 11:25:41.612236  230827 kubeadm.go:310] OS: Linux
	I0929 11:25:41.612315  230827 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 11:25:41.612380  230827 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 11:25:41.612444  230827 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 11:25:41.612510  230827 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 11:25:41.612575  230827 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 11:25:41.612656  230827 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 11:25:41.612711  230827 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 11:25:41.612776  230827 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 11:25:41.612841  230827 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 11:25:41.699306  230827 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:25:41.699420  230827 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:25:41.699533  230827 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
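
	As the preflight hint notes, the image pull can be done ahead of time; for the Kubernetes version used in this run that would be:

		kubeadm config images pull --kubernetes-version v1.34.0
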
	I0929 11:25:41.707854  230827 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:25:41.713904  230827 out.go:252]   - Generating certificates and keys ...
	I0929 11:25:41.714035  230827 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:25:41.714126  230827 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:25:41.905094  230827 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:25:42.356067  230827 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:25:43.166507  230827 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:25:43.927390  230827 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:25:44.194665  230827 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:25:44.194820  230827 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-163439 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 11:25:44.454564  230827 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:25:44.454714  230827 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-163439 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 11:25:45.340864  230827 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:25:45.477136  230827 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:25:46.162930  230827 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:25:46.163164  230827 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:25:46.391386  230827 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:25:46.530157  230827 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:25:47.025422  230827 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:25:47.569853  230827 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:25:47.922925  230827 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:25:47.923714  230827 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:25:47.927641  230827 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:25:47.931170  230827 out.go:252]   - Booting up control plane ...
	I0929 11:25:47.931285  230827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:25:47.931368  230827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:25:47.932110  230827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:25:47.942842  230827 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:25:47.943347  230827 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:25:47.950187  230827 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:25:47.950547  230827 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:25:47.950594  230827 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:25:48.064276  230827 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:25:48.064404  230827 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:25:49.066627  230827 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001576195s
	I0929 11:25:49.069704  230827 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:25:49.069803  230827 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0929 11:25:49.069927  230827 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:25:49.070011  230827 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:25:51.353437  230827 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.282681579s
	I0929 11:25:53.640098  230827 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.570378404s
	I0929 11:25:55.571348  230827 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501528279s
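
	The three control-plane-check probes above are plain HTTPS health endpoints and can be replayed by hand from inside the node (-k skips verification of the cluster-internal serving certificates):

		curl -k https://192.168.76.2:8443/livez     # kube-apiserver
		curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
		curl -k https://127.0.0.1:10259/livez       # kube-scheduler
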
	I0929 11:25:55.592539  230827 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:25:55.623920  230827 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:25:55.642533  230827 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:25:55.642748  230827 kubeadm.go:310] [mark-control-plane] Marking the node calico-163439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:25:55.655745  230827 kubeadm.go:310] [bootstrap-token] Using token: 0t9jvh.q2pk5k4oc63j2doa
	I0929 11:25:55.658966  230827 out.go:252]   - Configuring RBAC rules ...
	I0929 11:25:55.659094  230827 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:25:55.668496  230827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:25:55.678752  230827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:25:55.682837  230827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:25:55.686888  230827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:25:55.693727  230827 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:25:55.980602  230827 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:25:56.451433  230827 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:25:56.978964  230827 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:25:56.980326  230827 kubeadm.go:310] 
	I0929 11:25:56.980401  230827 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:25:56.980411  230827 kubeadm.go:310] 
	I0929 11:25:56.980493  230827 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:25:56.980497  230827 kubeadm.go:310] 
	I0929 11:25:56.980524  230827 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:25:56.980586  230827 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:25:56.980639  230827 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:25:56.980644  230827 kubeadm.go:310] 
	I0929 11:25:56.980700  230827 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:25:56.980705  230827 kubeadm.go:310] 
	I0929 11:25:56.980755  230827 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:25:56.980759  230827 kubeadm.go:310] 
	I0929 11:25:56.980813  230827 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:25:56.980899  230827 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:25:56.980973  230827 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:25:56.980978  230827 kubeadm.go:310] 
	I0929 11:25:56.981066  230827 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:25:56.981153  230827 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:25:56.981157  230827 kubeadm.go:310] 
	I0929 11:25:56.981245  230827 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0t9jvh.q2pk5k4oc63j2doa \
	I0929 11:25:56.981354  230827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:392fe149ecb175ae356dba308b7f8297c4b5919f46577a9f98ac6b1b62a4c584 \
	I0929 11:25:56.981376  230827 kubeadm.go:310] 	--control-plane 
	I0929 11:25:56.981380  230827 kubeadm.go:310] 
	I0929 11:25:56.981469  230827 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:25:56.981474  230827 kubeadm.go:310] 
	I0929 11:25:56.981560  230827 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0t9jvh.q2pk5k4oc63j2doa \
	I0929 11:25:56.981667  230827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:392fe149ecb175ae356dba308b7f8297c4b5919f46577a9f98ac6b1b62a4c584 
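
	The sha256 value in the join commands is the hash of the cluster CA's public key. The upstream kubeadm-documented way to recompute it, with the path adjusted to the certificateDir used in this run, is:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
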
	I0929 11:25:56.986334  230827 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 11:25:56.990712  230827 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 11:25:56.990946  230827 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
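
	The last warning is directly actionable; making the kubelet start on boot is a single unit enable:

		sudo systemctl enable kubelet.service
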
	I0929 11:25:56.990963  230827 cni.go:84] Creating CNI manager for "calico"
	I0929 11:25:56.995870  230827 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0929 11:25:56.999532  230827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 11:25:56.999564  230827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0929 11:25:57.027091  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 11:25:58.795153  230827 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.767989278s)
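
	With the Calico manifest applied, the rollout can be followed by watching the pods it creates. Assuming the standard labels from the upstream Calico manifest (not shown verbatim in this log):

		kubectl -n kube-system get pods -l k8s-app=calico-node
		kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers
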
	I0929 11:25:58.795218  230827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:25:58.795322  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:25:58.795411  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-163439 minikube.k8s.io/updated_at=2025_09_29T11_25_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170 minikube.k8s.io/name=calico-163439 minikube.k8s.io/primary=true
	I0929 11:25:58.957139  230827 ops.go:34] apiserver oom_adj: -16
	I0929 11:25:58.957310  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:25:59.457367  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:25:59.958286  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:26:00.457783  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:26:00.957810  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:26:01.457440  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:26:01.957376  230827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:26:02.066487  230827 kubeadm.go:1105] duration metric: took 3.271227489s to wait for elevateKubeSystemPrivileges
	I0929 11:26:02.066518  230827 kubeadm.go:394] duration metric: took 20.67669014s to StartCluster
	I0929 11:26:02.066536  230827 settings.go:142] acquiring lock: {Name:mk5a393e91300013a868ee870b6bf3cfd60dd530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:26:02.066612  230827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 11:26:02.067766  230827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/kubeconfig: {Name:mk74c1842d39026f9853151eb440c757ec3be664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:26:02.068025  230827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:26:02.068039  230827 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:26:02.068332  230827 config.go:182] Loaded profile config "calico-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:26:02.068372  230827 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 11:26:02.068434  230827 addons.go:69] Setting storage-provisioner=true in profile "calico-163439"
	I0929 11:26:02.068449  230827 addons.go:238] Setting addon storage-provisioner=true in "calico-163439"
	I0929 11:26:02.068474  230827 host.go:66] Checking if "calico-163439" exists ...
	I0929 11:26:02.068939  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Status}}
	I0929 11:26:02.069228  230827 addons.go:69] Setting default-storageclass=true in profile "calico-163439"
	I0929 11:26:02.069270  230827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-163439"
	I0929 11:26:02.069604  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Status}}
	I0929 11:26:02.072141  230827 out.go:179] * Verifying Kubernetes components...
	I0929 11:26:02.075427  230827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:26:02.100883  230827 addons.go:238] Setting addon default-storageclass=true in "calico-163439"
	I0929 11:26:02.100923  230827 host.go:66] Checking if "calico-163439" exists ...
	I0929 11:26:02.101327  230827 cli_runner.go:164] Run: docker container inspect calico-163439 --format={{.State.Status}}
	I0929 11:26:02.119219  230827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:26:02.122467  230827 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:26:02.122494  230827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:26:02.122558  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:26:02.133465  230827 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:26:02.133486  230827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:26:02.133547  230827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-163439
	I0929 11:26:02.161521  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:26:02.170789  230827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/calico-163439/id_rsa Username:docker}
	I0929 11:26:02.381299  230827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:26:02.381523  230827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:26:02.381799  230827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 11:26:02.393681  230827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:26:02.849176  230827 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
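
	The sed pipeline above rewrites the CoreDNS Corefile in place; after the replace, the Corefile carries a hosts block of this shape (reconstructed from the sed expression in the log):

		hosts {
		   192.168.76.1 host.minikube.internal
		   fallthrough
		}
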
	I0929 11:26:02.851044  230827 node_ready.go:35] waiting up to 15m0s for node "calico-163439" to be "Ready" ...
	I0929 11:26:03.177385  230827 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 11:26:03.180324  230827 addons.go:514] duration metric: took 1.111921637s for enable addons: enabled=[default-storageclass storage-provisioner]
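
	The enabled set can be confirmed afterwards with minikube's own addon listing for this profile (assuming the same binary used elsewhere in this report):

		out/minikube-linux-arm64 -p calico-163439 addons list
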
	I0929 11:26:03.355757  230827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-163439" context rescaled to 1 replicas
	W0929 11:26:04.856321  230827 node_ready.go:57] node "calico-163439" has "Ready":"False" status (will retry)
	W0929 11:26:07.354606  230827 node_ready.go:57] node "calico-163439" has "Ready":"False" status (will retry)
	I0929 11:26:07.854735  230827 node_ready.go:49] node "calico-163439" is "Ready"
	I0929 11:26:07.854767  230827 node_ready.go:38] duration metric: took 5.00359257s for node "calico-163439" to be "Ready" ...
	I0929 11:26:07.854780  230827 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:26:07.854851  230827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:26:07.867677  230827 api_server.go:72] duration metric: took 5.799608019s to wait for apiserver process to appear ...
	I0929 11:26:07.867703  230827 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:26:07.867722  230827 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 11:26:07.876775  230827 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 11:26:07.877933  230827 api_server.go:141] control plane version: v1.34.0
	I0929 11:26:07.877959  230827 api_server.go:131] duration metric: took 10.249369ms to wait for apiserver health ...
	I0929 11:26:07.877968  230827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:26:07.881480  230827 system_pods.go:59] 9 kube-system pods found
	I0929 11:26:07.881524  230827 system_pods.go:61] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:07.881534  230827 system_pods.go:61] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:07.881542  230827 system_pods.go:61] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:07.881548  230827 system_pods.go:61] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:07.881558  230827 system_pods.go:61] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:07.881564  230827 system_pods.go:61] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:26:07.881573  230827 system_pods.go:61] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:07.881578  230827 system_pods.go:61] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:07.881584  230827 system_pods.go:61] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:26:07.881595  230827 system_pods.go:74] duration metric: took 3.619741ms to wait for pod list to return data ...
	I0929 11:26:07.881604  230827 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:26:07.884287  230827 default_sa.go:45] found service account: "default"
	I0929 11:26:07.884313  230827 default_sa.go:55] duration metric: took 2.698607ms for default service account to be created ...
	I0929 11:26:07.884324  230827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:26:07.887344  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:07.887380  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:07.887389  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:07.887423  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:07.887429  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:07.887434  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:07.887441  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:26:07.887457  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:07.887463  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:07.887468  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:26:07.887501  230827 retry.go:31] will retry after 234.183576ms: missing components: kube-dns
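
	Each retry above is gated on the kube-dns component, which in a CoreDNS cluster maps to the coredns pods via their legacy k8s-app=kube-dns label; the equivalent manual check is:

		kubectl -n kube-system get pods -l k8s-app=kube-dns
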
	I0929 11:26:08.139182  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:08.139219  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:08.139230  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:08.139271  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:08.139283  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:08.139290  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:08.139296  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:26:08.139305  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:08.139310  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:08.139316  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:26:08.139356  230827 retry.go:31] will retry after 327.024148ms: missing components: kube-dns
	I0929 11:26:08.470969  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:08.471002  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:08.471012  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:08.471053  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:08.471069  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:08.471075  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:08.471082  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:26:08.471091  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:08.471096  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:08.471114  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:26:08.471180  230827 retry.go:31] will retry after 361.735581ms: missing components: kube-dns
	I0929 11:26:08.837128  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:08.837169  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:08.837199  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:08.837214  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:08.837226  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:08.837232  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:08.837237  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:08.837247  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:08.837252  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:08.837264  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:08.837279  230827 retry.go:31] will retry after 428.72563ms: missing components: kube-dns
	I0929 11:26:09.270173  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:09.270207  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:09.270218  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:09.270237  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:09.270248  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:09.270255  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:09.270261  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:09.270267  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:09.270275  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:09.270282  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:09.270297  230827 retry.go:31] will retry after 701.955721ms: missing components: kube-dns
	I0929 11:26:09.976193  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:09.976232  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:09.976242  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:09.976250  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:09.976255  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:09.976261  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:09.976273  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:09.976277  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:09.976283  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:09.976287  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:09.976301  230827 retry.go:31] will retry after 855.43982ms: missing components: kube-dns
	I0929 11:26:10.835813  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:10.835860  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:10.835871  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:10.835879  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:10.835884  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:10.835889  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:10.835942  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:10.835947  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:10.835952  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:10.835959  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:10.835973  230827 retry.go:31] will retry after 939.06254ms: missing components: kube-dns
	I0929 11:26:11.779501  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:11.779538  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:11.779548  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:11.779556  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:11.779563  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:11.779569  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:11.779573  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:11.779584  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:11.779589  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:11.779596  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:11.779611  230827 retry.go:31] will retry after 1.337658615s: missing components: kube-dns
	I0929 11:26:13.122664  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:13.122702  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:13.122716  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:13.122724  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:13.122730  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:13.122736  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:13.122740  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:13.122744  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:13.122748  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:13.122752  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:13.122766  230827 retry.go:31] will retry after 1.2305819s: missing components: kube-dns
	I0929 11:26:14.369374  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:14.369467  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:14.369495  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:14.369517  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:14.369546  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:14.369575  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:14.369604  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:14.369623  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:14.369641  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:14.369663  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:14.369710  230827 retry.go:31] will retry after 1.826626473s: missing components: kube-dns
	I0929 11:26:16.216658  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:16.216701  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:16.216710  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:16.216718  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:16.216723  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:16.216729  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:16.216734  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:16.216738  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:16.216742  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:16.216746  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:16.216762  230827 retry.go:31] will retry after 2.333746249s: missing components: kube-dns
	I0929 11:26:18.556339  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:18.556378  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:18.556391  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:18.556399  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:18.556403  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:18.556411  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:18.556415  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:18.556419  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:18.556424  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:18.556429  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:18.556444  230827 retry.go:31] will retry after 2.829396404s: missing components: kube-dns
	I0929 11:26:21.391958  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:21.391991  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:21.392002  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:21.392009  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:21.392014  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:21.392020  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:21.392024  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:21.392029  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:21.392033  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:21.392036  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:21.392049  230827 retry.go:31] will retry after 3.524096752s: missing components: kube-dns
	I0929 11:26:24.923319  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:24.923350  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:24.923361  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:24.923369  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:24.923375  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:24.923382  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:24.923386  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:24.923391  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:24.923394  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:24.923398  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:24.923411  230827 retry.go:31] will retry after 5.469438734s: missing components: kube-dns
	I0929 11:26:30.400092  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:30.400130  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:30.400153  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:30.400164  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:30.400173  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:30.400179  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:30.400185  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:30.400190  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:30.400193  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:30.400197  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:30.400210  230827 retry.go:31] will retry after 6.417818819s: missing components: kube-dns
	I0929 11:26:36.821886  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:36.821924  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:36.821935  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:36.821943  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:36.821949  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:36.821956  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:36.821961  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:36.821966  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:36.821972  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:36.821987  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:36.822001  230827 retry.go:31] will retry after 5.536374624s: missing components: kube-dns
	I0929 11:26:42.365702  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:42.365739  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:42.365750  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:42.365759  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:42.365765  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:42.365772  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:42.365777  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:42.365784  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:42.365788  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:42.365792  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:42.365808  230827 retry.go:31] will retry after 7.354128246s: missing components: kube-dns
	I0929 11:26:49.727691  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:49.727727  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:49.727737  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:49.727745  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:49.727750  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:49.727755  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:49.727759  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:49.727764  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:49.727768  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:49.727771  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:49.727784  230827 retry.go:31] will retry after 9.69586151s: missing components: kube-dns
	I0929 11:26:59.428637  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:26:59.428678  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:26:59.428687  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:26:59.428696  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:26:59.428700  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:26:59.428707  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:26:59.428712  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:26:59.428716  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:26:59.428721  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:26:59.428731  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:26:59.428746  230827 retry.go:31] will retry after 14.139329475s: missing components: kube-dns
	I0929 11:27:13.572161  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:27:13.572206  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:27:13.572218  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:27:13.572228  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:27:13.572233  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:27:13.572239  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:27:13.572243  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:27:13.572248  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:27:13.572256  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:27:13.572261  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:27:13.572277  230827 retry.go:31] will retry after 18.391730013s: missing components: kube-dns
	I0929 11:27:31.968678  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:27:31.968708  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:27:31.968720  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:27:31.968728  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:27:31.968733  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:27:31.968738  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:27:31.968742  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:27:31.968746  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:27:31.968750  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:27:31.968753  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:27:31.968768  230827 retry.go:31] will retry after 20.543697325s: missing components: kube-dns
	I0929 11:27:52.516783  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:27:52.516821  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:27:52.516831  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:27:52.516840  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:27:52.516845  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:27:52.516853  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:27:52.516858  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:27:52.516873  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:27:52.516878  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:27:52.516894  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:27:52.516912  230827 retry.go:31] will retry after 32.633619048s: missing components: kube-dns
	I0929 11:28:25.157216  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:28:25.157262  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:28:25.157275  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:28:25.157283  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:28:25.157288  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:28:25.157296  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:28:25.157312  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:28:25.157325  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:28:25.157330  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:28:25.157347  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:28:25.157363  230827 retry.go:31] will retry after 37.927903203s: missing components: kube-dns
	I0929 11:29:03.091376  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:29:03.091412  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:29:03.091423  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:29:03.091430  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:29:03.091435  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:29:03.091441  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:29:03.091444  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:29:03.091449  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:29:03.091453  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:29:03.091456  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:29:03.091470  230827 retry.go:31] will retry after 49.093987141s: missing components: kube-dns
	I0929 11:29:52.191301  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:29:52.191335  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:29:52.191346  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:29:52.191353  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:29:52.191359  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:29:52.191365  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:29:52.191370  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:29:52.191374  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:29:52.191378  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:29:52.191383  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:29:52.191396  230827 retry.go:31] will retry after 1m1.705041598s: missing components: kube-dns
	I0929 11:30:53.902169  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:30:53.902216  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:30:53.902228  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:30:53.902235  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:30:53.902240  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:30:53.902245  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:30:53.902250  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:30:53.902255  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:30:53.902259  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:30:53.902264  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:30:53.902278  230827 retry.go:31] will retry after 54.511203617s: missing components: kube-dns
	I0929 11:31:48.417357  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:31:48.417396  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:31:48.417407  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:31:48.417414  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:48.417419  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:31:48.417425  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:31:48.417429  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:31:48.417433  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:31:48.417438  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:31:48.417448  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:31:48.417462  230827 retry.go:31] will retry after 1m13.609934589s: missing components: kube-dns
	I0929 11:33:02.031667  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:33:02.031722  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:33:02.031734  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:33:02.031744  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:33:02.031749  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:33:02.031757  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:33:02.031762  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:33:02.031767  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:33:02.031791  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:33:02.031800  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:33:02.031821  230827 retry.go:31] will retry after 1m5.017477671s: missing components: kube-dns
	I0929 11:34:07.053142  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:34:07.053180  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:34:07.053190  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:34:07.053198  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:34:07.053202  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:34:07.053208  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:34:07.053212  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:34:07.053217  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:34:07.053222  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:34:07.053232  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:34:07.053247  230827 retry.go:31] will retry after 1m9.910916872s: missing components: kube-dns
	I0929 11:35:16.968744  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:35:16.968784  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:35:16.968795  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:35:16.968803  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:35:16.968808  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:35:16.968814  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:35:16.968818  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:35:16.968828  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:35:16.968833  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:35:16.968841  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:35:16.968855  230827 retry.go:31] will retry after 1m6.057616219s: missing components: kube-dns
	I0929 11:36:23.029956  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:36:23.029996  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:36:23.030008  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:36:23.030016  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:36:23.030021  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:36:23.030027  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:36:23.030033  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:36:23.030043  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:36:23.030048  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:36:23.030059  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:36:23.030074  230827 retry.go:31] will retry after 57.487453525s: missing components: kube-dns
	I0929 11:37:20.525355  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:37:20.525394  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:37:20.525408  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:37:20.525420  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:37:20.525425  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:37:20.525432  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:37:20.525436  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:37:20.525448  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:37:20.525452  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:37:20.525457  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:37:20.525471  230827 retry.go:31] will retry after 1m7.977924186s: missing components: kube-dns
	I0929 11:38:28.507545  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:38:28.507586  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:38:28.507598  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:38:28.507607  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:38:28.507612  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:38:28.507618  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:38:28.507622  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:38:28.507628  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:38:28.507633  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:38:28.507637  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:38:28.507656  230827 retry.go:31] will retry after 1m12.680637114s: missing components: kube-dns
	I0929 11:39:41.192495  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:39:41.192536  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:39:41.192548  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:39:41.192556  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:39:41.192561  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:39:41.192568  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:39:41.192572  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:39:41.192577  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:39:41.192581  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:39:41.192587  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:39:41.192605  230827 retry.go:31] will retry after 1m5.275414044s: missing components: kube-dns
	I0929 11:40:46.472221  230827 system_pods.go:86] 9 kube-system pods found
	I0929 11:40:46.472260  230827 system_pods.go:89] "calico-kube-controllers-59556d9b4c-t5k7j" [f2fdcdd9-8ea7-47f1-ab44-7f3868a75ae1] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 11:40:46.472271  230827 system_pods.go:89] "calico-node-hk5rx" [16ff9d86-89ed-4516-b92f-428e0870daa8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 11:40:46.472278  230827 system_pods.go:89] "coredns-66bc5c9577-wm7j5" [c25a48d2-fb8c-4203-b428-d2ee99f793af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:40:46.472283  230827 system_pods.go:89] "etcd-calico-163439" [43d0771a-8810-4e6b-a69d-5d88b0b524d4] Running
	I0929 11:40:46.472289  230827 system_pods.go:89] "kube-apiserver-calico-163439" [ee8acf98-bda0-4581-b882-43e5b710fd22] Running
	I0929 11:40:46.472293  230827 system_pods.go:89] "kube-controller-manager-calico-163439" [d4def839-5828-4046-b77a-fa32c5b3e9f4] Running
	I0929 11:40:46.472297  230827 system_pods.go:89] "kube-proxy-mddpv" [855aa059-06c2-4610-b641-782768ccdeed] Running
	I0929 11:40:46.472301  230827 system_pods.go:89] "kube-scheduler-calico-163439" [1d8bfbcd-7b14-4324-bac6-1655fddb0cf8] Running
	I0929 11:40:46.472307  230827 system_pods.go:89] "storage-provisioner" [6bf96b4e-1130-4b96-9a7e-d3e876b3928c] Running
	I0929 11:40:46.475662  230827 out.go:203] 
	W0929 11:40:46.478705  230827 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0929 11:40:46.478725  230827 out.go:285] * 
	W0929 11:40:46.480878  230827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0929 11:40:46.482774  230827 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (920.53s)
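For context on the "retry.go:31] will retry after ..." cadence in the log above: the waiter polls the kube-system pod list and sleeps a growing, jittered interval between attempts until the 15m0s budget ("wait 15m0s for node") runs out. The Go sketch below reproduces that shape; the names (retryUntil, check) and constants are illustrative assumptions, not minikube's actual retry code.

    // A minimal sketch of a capped, jittered retry loop; illustrative only.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil calls check until it succeeds or the timeout elapses,
    // sleeping a growing, jittered interval between attempts -- the same
    // shape as the "will retry after ..." lines in the log above.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        wait := 2 * time.Second // first delay, roughly matching the log
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("wait %v for node: %w", timeout, err)
            }
            // Jitter by up to +/-25% so parallel waiters don't poll in lockstep.
            jitter := time.Duration(rand.Int63n(int64(wait)/2)) - wait/4
            fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
            time.Sleep(wait + jitter)
            if wait < time.Minute { // cap near the ~1m intervals seen above
                wait = wait * 3 / 2
            }
        }
    }

    func main() {
        // Short timeout for the demo; the failing run above used 15m0s.
        err := retryUntil(10*time.Second, func() error {
            return errors.New("missing components: kube-dns")
        })
        fmt.Println(err)
    }

In this run every poll kept returning "missing components: kube-dns" (coredns never left Pending because calico-node never initialized), so the loop exhausted its budget and minikube surfaced the GUEST_START error captured above.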

                                                
                                    

Test pass (287/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.69
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.86
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.18
18 TestDownloadOnly/v1.34.0/DeleteAll 0.31
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.19
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 179.37
31 TestAddons/serial/GCPAuth/Namespaces 0.24
32 TestAddons/serial/GCPAuth/FakeCredentials 10.91
35 TestAddons/parallel/Registry 16.72
36 TestAddons/parallel/RegistryCreds 0.74
38 TestAddons/parallel/InspektorGadget 6.37
39 TestAddons/parallel/MetricsServer 5.89
41 TestAddons/parallel/CSI 37.59
42 TestAddons/parallel/Headlamp 19.89
43 TestAddons/parallel/CloudSpanner 6.63
44 TestAddons/parallel/LocalPath 11.25
45 TestAddons/parallel/NvidiaDevicePlugin 5.77
46 TestAddons/parallel/Yakd 11.8
48 TestAddons/StoppedEnableDisable 12.13
49 TestCertOptions 44.37
50 TestCertExpiration 254.21
52 TestForceSystemdFlag 32.31
53 TestForceSystemdEnv 42.3
59 TestErrorSpam/setup 28.02
60 TestErrorSpam/start 0.77
61 TestErrorSpam/status 1.14
62 TestErrorSpam/pause 1.67
63 TestErrorSpam/unpause 1.76
64 TestErrorSpam/stop 1.48
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 83.88
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 28.59
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.79
76 TestFunctional/serial/CacheCmd/cache/add_local 1.37
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.32
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 30.96
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.79
87 TestFunctional/serial/LogsFileCmd 1.77
88 TestFunctional/serial/InvalidService 4.73
90 TestFunctional/parallel/ConfigCmd 0.42
91 TestFunctional/parallel/DashboardCmd 11.05
92 TestFunctional/parallel/DryRun 0.63
93 TestFunctional/parallel/InternationalLanguage 0.25
94 TestFunctional/parallel/StatusCmd 1.23
99 TestFunctional/parallel/AddonsCmd 0.23
100 TestFunctional/parallel/PersistentVolumeClaim 24.86
102 TestFunctional/parallel/SSHCmd 0.72
103 TestFunctional/parallel/CpCmd 2.49
105 TestFunctional/parallel/FileSync 0.35
106 TestFunctional/parallel/CertSync 2.08
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
114 TestFunctional/parallel/License 0.33
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.51
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
128 TestFunctional/parallel/ProfileCmd/profile_list 0.43
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
130 TestFunctional/parallel/MountCmd/any-port 9.13
131 TestFunctional/parallel/MountCmd/specific-port 1.7
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
133 TestFunctional/parallel/ServiceCmd/List 0.71
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.37
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
145 TestFunctional/parallel/ImageCommands/Setup 0.68
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.29
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.96
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.65
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.97
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 200.34
164 TestMultiControlPlane/serial/DeployApp 8.76
165 TestMultiControlPlane/serial/PingHostFromPods 1.62
166 TestMultiControlPlane/serial/AddWorkerNode 59.17
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
169 TestMultiControlPlane/serial/CopyFile 19.52
170 TestMultiControlPlane/serial/StopSecondaryNode 12.74
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
172 TestMultiControlPlane/serial/RestartSecondaryNode 35.45
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.17
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 124.58
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.38
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
177 TestMultiControlPlane/serial/StopCluster 35.52
178 TestMultiControlPlane/serial/RestartCluster 78.38
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
180 TestMultiControlPlane/serial/AddSecondaryNode 81.74
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 48.35
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.64
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.82
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 38.92
211 TestKicCustomNetwork/use_default_bridge_network 31.54
212 TestKicExistingNetwork 37.49
213 TestKicCustomSubnet 33.75
214 TestKicStaticIP 38.75
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 71.53
219 TestMountStart/serial/StartWithMountFirst 6.88
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.51
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 8.15
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 105.44
231 TestMultiNode/serial/DeployApp2Nodes 6.69
232 TestMultiNode/serial/PingHostFrom2Pods 0.99
233 TestMultiNode/serial/AddNode 56.48
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.76
236 TestMultiNode/serial/CopyFile 10.16
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 8.05
239 TestMultiNode/serial/RestartKeepsNodes 81.29
240 TestMultiNode/serial/DeleteNode 5.6
241 TestMultiNode/serial/StopMultiNode 23.76
242 TestMultiNode/serial/RestartMultiNode 56.43
243 TestMultiNode/serial/ValidateNameConflict 36.32
248 TestPreload 132.61
250 TestScheduledStopUnix 109.33
253 TestInsufficientStorage 10.99
254 TestRunningBinaryUpgrade 67.21
256 TestKubernetesUpgrade 126.92
257 TestMissingContainerUpgrade 123.29
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 46.37
261 TestNoKubernetes/serial/StartWithStopK8s 19.98
262 TestNoKubernetes/serial/Start 6.71
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
264 TestNoKubernetes/serial/ProfileList 0.91
265 TestNoKubernetes/serial/Stop 1.2
266 TestNoKubernetes/serial/StartNoArgs 7.3
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.72
269 TestStoppedBinaryUpgrade/Upgrade 65.06
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
279 TestPause/serial/Start 90.56
287 TestNetworkPlugins/group/false 3.71
291 TestPause/serial/SecondStartNoReconfiguration 28.72
292 TestPause/serial/Pause 1.16
293 TestPause/serial/VerifyStatus 0.48
294 TestPause/serial/Unpause 1.03
295 TestPause/serial/PauseAgain 1.42
296 TestPause/serial/DeletePaused 3.23
297 TestPause/serial/VerifyDeletedResources 0.51
299 TestStartStop/group/old-k8s-version/serial/FirstStart 62.17
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
302 TestStartStop/group/old-k8s-version/serial/Stop 11.92
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 57.92
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
308 TestStartStop/group/old-k8s-version/serial/Pause 3.11
310 TestStartStop/group/no-preload/serial/FirstStart 75.72
312 TestStartStop/group/embed-certs/serial/FirstStart 85.34
313 TestStartStop/group/no-preload/serial/DeployApp 10.39
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
315 TestStartStop/group/no-preload/serial/Stop 12
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/no-preload/serial/SecondStart 56.04
318 TestStartStop/group/embed-certs/serial/DeployApp 9.41
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.51
320 TestStartStop/group/embed-certs/serial/Stop 11.93
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
322 TestStartStop/group/embed-certs/serial/SecondStart 54.74
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
326 TestStartStop/group/no-preload/serial/Pause 4.12
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.16
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
332 TestStartStop/group/embed-certs/serial/Pause 3.44
334 TestStartStop/group/newest-cni/serial/FirstStart 33.15
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
337 TestStartStop/group/newest-cni/serial/Stop 1.55
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 15.53
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.43
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.65
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.32
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
346 TestStartStop/group/newest-cni/serial/Pause 3
347 TestNetworkPlugins/group/auto/Start 85.08
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.48
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.82
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.6
354 TestNetworkPlugins/group/auto/KubeletFlags 0.41
355 TestNetworkPlugins/group/auto/NetCatPod 12.47
356 TestNetworkPlugins/group/kindnet/Start 84.2
357 TestNetworkPlugins/group/auto/DNS 0.22
358 TestNetworkPlugins/group/auto/Localhost 0.18
359 TestNetworkPlugins/group/auto/HairPin 0.18
361 TestNetworkPlugins/group/kindnet/ControllerPod 6
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
363 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
364 TestNetworkPlugins/group/kindnet/DNS 0.18
365 TestNetworkPlugins/group/kindnet/Localhost 0.15
366 TestNetworkPlugins/group/kindnet/HairPin 0.18
367 TestNetworkPlugins/group/custom-flannel/Start 57.06
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
370 TestNetworkPlugins/group/custom-flannel/DNS 0.2
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
373 TestNetworkPlugins/group/enable-default-cni/Start 75.7
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
379 TestNetworkPlugins/group/flannel/Start 56.9
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
382 TestNetworkPlugins/group/flannel/NetCatPod 10.25
383 TestNetworkPlugins/group/flannel/DNS 0.19
384 TestNetworkPlugins/group/flannel/Localhost 0.16
385 TestNetworkPlugins/group/flannel/HairPin 0.16
386 TestNetworkPlugins/group/bridge/Start 74.2
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
388 TestNetworkPlugins/group/bridge/NetCatPod 10.3
389 TestNetworkPlugins/group/bridge/DNS 0.17
390 TestNetworkPlugins/group/bridge/Localhost 0.17
391 TestNetworkPlugins/group/bridge/HairPin 0.21
TestDownloadOnly/v1.28.0/json-events (5.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-012075 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-012075 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.688048989s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.69s)
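
The -o=json flag in the command above makes minikube emit one JSON event per line on stdout; that stream is what the json-events test consumes. Below is a minimal Go sketch of a consumer for that stream, assuming only that each line decodes as a JSON object with "type" and "data" fields (the command line is copied from the run above; the field names are an assumption about the event schema, not something this log confirms):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same download-only invocation the test drives, minus the log flags.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-012075", "--force",
		"--kubernetes-version=v1.28.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev map[string]any // one JSON event object per line (assumed schema)
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		fmt.Println(ev["type"], ev["data"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}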

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 10:20:01.389470    4108 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 10:20:01.389560    4108 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-012075
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-012075: exit status 85 (90.530024ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-012075 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-012075 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:55.744431    4113 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:55.744653    4113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:55.744683    4113 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:55.744702    4113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:55.744968    4113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	W0929 10:19:55.745153    4113 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21657-2306/.minikube/config/config.json: open /home/jenkins/minikube-integration/21657-2306/.minikube/config/config.json: no such file or directory
	I0929 10:19:55.745588    4113 out.go:368] Setting JSON to true
	I0929 10:19:55.746445    4113 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":145,"bootTime":1759141051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 10:19:55.746561    4113 start.go:140] virtualization:  
	I0929 10:19:55.750844    4113 out.go:99] [download-only-012075] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0929 10:19:55.751037    4113 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 10:19:55.751098    4113 notify.go:220] Checking for updates...
	I0929 10:19:55.754030    4113 out.go:171] MINIKUBE_LOCATION=21657
	I0929 10:19:55.757093    4113 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:55.760196    4113 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:19:55.763080    4113 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 10:19:55.765951    4113 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0929 10:19:55.771676    4113 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:19:55.771932    4113 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:55.804523    4113 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 10:19:55.804626    4113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:56.214326    4113 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-29 10:19:56.205121227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:19:56.214432    4113 docker.go:318] overlay module found
	I0929 10:19:56.217458    4113 out.go:99] Using the docker driver based on user configuration
	I0929 10:19:56.217497    4113 start.go:304] selected driver: docker
	I0929 10:19:56.217504    4113 start.go:924] validating driver "docker" against <nil>
	I0929 10:19:56.217646    4113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:56.274524    4113 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-29 10:19:56.26573347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:19:56.274698    4113 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:56.275000    4113 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0929 10:19:56.275183    4113 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:19:56.278297    4113 out.go:171] Using Docker driver with root privileges
	I0929 10:19:56.281038    4113 cni.go:84] Creating CNI manager for ""
	I0929 10:19:56.281116    4113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:19:56.281130    4113 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:56.281206    4113 start.go:348] cluster config:
	{Name:download-only-012075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-012075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:19:56.284213    4113 out.go:99] Starting "download-only-012075" primary control-plane node in "download-only-012075" cluster
	I0929 10:19:56.284240    4113 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:19:56.287067    4113 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:19:56.287098    4113 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:19:56.287296    4113 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:19:56.302519    4113 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:19:56.302672    4113 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:19:56.302783    4113 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:19:56.349106    4113 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0929 10:19:56.349132    4113 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:56.349284    4113 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:19:56.352824    4113 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 10:19:56.352865    4113 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 10:19:56.443359    4113 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0929 10:19:59.413456    4113 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 10:19:59.413551    4113 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 10:20:00.447696    4113 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0929 10:20:00.448101    4113 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/download-only-012075/config.json ...
	I0929 10:20:00.448230    4113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/download-only-012075/config.json: {Name:mk9f11db3088ac76ebaa67dd0545d87ea870efd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:00.448476    4113 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:20:00.448697    4113 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21657-2306/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-012075 host does not exist
	  To start a cluster, run: "minikube start -p download-only-012075"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
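
The log above shows the preload flow: download.go fetches the tarball with a ?checksum=md5:... hint, then preload.go saves and verifies the checksum before declaring the preload usable. A minimal sketch of that verification step, assuming a default ~/.minikube cache layout (this CI run uses a custom MINIKUBE_HOME) and reusing the md5 value from the download URL; it illustrates the check, it is not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Checksum taken from the ?checksum=md5:... query in the log above.
	const want = "e092595ade89dbfc477bd4cd6b9c633b"
	path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("preload checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("preload tarball verified")
}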

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-012075
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (5.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-518557 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-518557 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.858211086s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.86s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 10:20:07.715063    4108 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 10:20:07.715100    4108 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-518557
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-518557: exit status 85 (180.555584ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-012075 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-012075 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │ 29 Sep 25 10:20 UTC │
	│ delete  │ -p download-only-012075                                                                                                                                                   │ download-only-012075 │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │ 29 Sep 25 10:20 UTC │
	│ start   │ -o=json --download-only -p download-only-518557 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-518557 │ jenkins │ v1.37.0 │ 29 Sep 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:20:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:20:01.896110    4312 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:20:01.896277    4312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:20:01.896307    4312 out.go:374] Setting ErrFile to fd 2...
	I0929 10:20:01.896327    4312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:20:01.896592    4312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:20:01.897022    4312 out.go:368] Setting JSON to true
	I0929 10:20:01.897795    4312 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":151,"bootTime":1759141051,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 10:20:01.897895    4312 start.go:140] virtualization:  
	I0929 10:20:01.901471    4312 out.go:99] [download-only-518557] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 10:20:01.901822    4312 notify.go:220] Checking for updates...
	I0929 10:20:01.905769    4312 out.go:171] MINIKUBE_LOCATION=21657
	I0929 10:20:01.908715    4312 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:20:01.911705    4312 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:20:01.914599    4312 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 10:20:01.917677    4312 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0929 10:20:01.923428    4312 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:20:01.923696    4312 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:20:01.947384    4312 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 10:20:01.947501    4312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:20:02.016170    4312 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-29 10:20:02.005308733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:20:02.016278    4312 docker.go:318] overlay module found
	I0929 10:20:02.019515    4312 out.go:99] Using the docker driver based on user configuration
	I0929 10:20:02.019574    4312 start.go:304] selected driver: docker
	I0929 10:20:02.019582    4312 start.go:924] validating driver "docker" against <nil>
	I0929 10:20:02.019716    4312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:20:02.082117    4312 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-29 10:20:02.072671399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:20:02.082271    4312 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:20:02.082562    4312 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0929 10:20:02.082720    4312 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:20:02.085891    4312 out.go:171] Using Docker driver with root privileges
	I0929 10:20:02.088687    4312 cni.go:84] Creating CNI manager for ""
	I0929 10:20:02.088765    4312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:20:02.088779    4312 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:20:02.088858    4312 start.go:348] cluster config:
	{Name:download-only-518557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-518557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:20:02.091870    4312 out.go:99] Starting "download-only-518557" primary control-plane node in "download-only-518557" cluster
	I0929 10:20:02.091903    4312 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:20:02.094944    4312 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:20:02.094988    4312 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:02.095110    4312 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:20:02.110606    4312 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:20:02.110743    4312 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:20:02.110761    4312 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:20:02.110767    4312 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:20:02.110774    4312 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:20:02.156996    4312 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 10:20:02.157030    4312 cache.go:58] Caching tarball of preloaded images
	I0929 10:20:02.157182    4312 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:02.160573    4312 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 10:20:02.160608    4312 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 10:20:02.248498    4312 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 10:20:06.094711    4312 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 10:20:06.094819    4312 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21657-2306/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-518557 host does not exist
	  To start a cluster, run: "minikube start -p download-only-518557"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.18s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.31s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-518557
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.19s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 10:20:09.603880    4108 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-445148 --alsologtostderr --binary-mirror http://127.0.0.1:40861 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-445148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-445148
--- PASS: TestBinaryMirror (0.61s)
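
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:40861, so the kubectl/kubelet/kubeadm downloads are redirected from dl.k8s.io to a local HTTP server. A minimal sketch of such a mirror, assuming the binaries and their .sha256 files have been pre-staged under ./mirror with the same path layout as dl.k8s.io (e.g. release/v1.34.0/bin/linux/arm64/kubectl):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the staged files; minikube appends its usual dl.k8s.io-style paths.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:40861", nil))
}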

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-718460
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-718460: exit status 85 (70.159441ms)

                                                
                                                
-- stdout --
	* Profile "addons-718460" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-718460"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-718460
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-718460: exit status 85 (75.22179ms)

                                                
                                                
-- stdout --
	* Profile "addons-718460" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-718460"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (179.37s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-718460 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-718460 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m59.372188314s)
--- PASS: TestAddons/Setup (179.37s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.24s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-718460 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-718460 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.91s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-718460 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-718460 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a150d30b-9143-4570-b3d9-25b2fabbccbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a150d30b-9143-4570-b3d9-25b2fabbccbc] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003472208s
addons_test.go:694: (dbg) Run:  kubectl --context addons-718460 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-718460 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-718460 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-718460 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.91s)
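
The exec steps above confirm, from inside the pod, that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and mounted the fake credentials file. A standalone sketch of the same probe driven from the host (the context and pod names are taken from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the test's `kubectl exec busybox -- printenv ...` step above.
	out, err := exec.Command("kubectl", "--context", "addons-718460",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		log.Fatalf("printenv failed: %v", err)
	}
	path := strings.TrimSpace(string(out))
	if path == "" {
		log.Fatal("GOOGLE_APPLICATION_CREDENTIALS was not injected")
	}
	fmt.Println("credentials mounted at:", path)
}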

                                                
                                    
TestAddons/parallel/Registry (16.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.657885ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-57h5w" [4eda4165-6bbd-4182-9ac0-af4c38cfe95f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00348761s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-g59x4" [41977be3-9774-4847-9974-3c460f42c342] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003315248s
addons_test.go:392: (dbg) Run:  kubectl --context addons-718460 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-718460 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-718460 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.709066314s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 ip
2025/09/29 10:23:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.72s)
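
Besides the in-cluster wget --spider check, the test also fetches the registry through the node IP, as the "[DEBUG] GET http://192.168.49.2:5000" line shows. A minimal equivalent probe (the address is this run's `minikube ip` plus the registry port from the log; both are assumptions outside this specific run):

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000") // node IP + registry port from the log
	if err != nil {
		log.Fatalf("registry not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}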

                                                
                                    
TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.643853ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-718460
addons_test.go:332: (dbg) Run:  kubectl --context addons-718460 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.37s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-c9wmw" [f0542ba6-b2fe-4c6a-a251-cb867c16c0c6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004685551s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.37s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.89s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.379569ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-x77fq" [bc238f5c-4424-4932-8e89-26565d9fe1f2] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004172296s
addons_test.go:463: (dbg) Run:  kubectl --context addons-718460 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

                                                
                                    
TestAddons/parallel/CSI (37.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 10:23:57.612350    4108 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 10:23:57.621906    4108 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 10:23:57.621936    4108 kapi.go:107] duration metric: took 9.599778ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.61019ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-718460 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-718460 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2758a193-84e6-40b4-bc77-e24c5ccf6b33] Pending
helpers_test.go:352: "task-pv-pod" [2758a193-84e6-40b4-bc77-e24c5ccf6b33] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2758a193-84e6-40b4-bc77-e24c5ccf6b33] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004072592s
addons_test.go:572: (dbg) Run:  kubectl --context addons-718460 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-718460 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-718460 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-718460 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-718460 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-718460 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-718460 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [98eb8c85-57c9-4b01-959c-73e736de0cf8] Pending
helpers_test.go:352: "task-pv-pod-restore" [98eb8c85-57c9-4b01-959c-73e736de0cf8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [98eb8c85-57c9-4b01-959c-73e736de0cf8] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003880673s
addons_test.go:614: (dbg) Run:  kubectl --context addons-718460 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-718460 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-718460 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-718460 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.281669665s)
--- PASS: TestAddons/parallel/CSI (37.59s)
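
Each helpers_test.go:402 line above is one iteration of a poll on `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim leaves Pending. A standalone sketch of that poll loop, with the claim name and the 6m budget taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the test's wait budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-718460",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second) // re-check, as the helper does
	}
	log.Fatal("timed out waiting for pvc to bind")
}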

                                                
                                    
TestAddons/parallel/Headlamp (19.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-718460 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-718460 --alsologtostderr -v=1: (1.076856573s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-knk9c" [39c582af-610a-4851-96ff-20b5c1b8b4cc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-knk9c" [39c582af-610a-4851-96ff-20b5c1b8b4cc] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.002759082s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-718460 addons disable headlamp --alsologtostderr -v=1: (5.813794495s)
--- PASS: TestAddons/parallel/Headlamp (19.89s)
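The enable/wait/disable cycle exercised here can be reproduced by hand; a minimal sketch (the timeout value is an arbitrary choice, not the test's) is:

	minikube addons enable headlamp -p addons-718460
	kubectl --context addons-718460 -n headlamp wait --for=condition=ready pod \
	  -l app.kubernetes.io/name=headlamp --timeout=120s
	minikube -p addons-718460 addons disable headlamp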

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-4b6f4" [1ed801e1-80d2-4e67-b050-c6cb5878be56] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003582935s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/LocalPath (11.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-718460 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-718460 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [9f7524d9-be9c-46d0-92d9-0ebfd841a50c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [9f7524d9-be9c-46d0-92d9-0ebfd841a50c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [9f7524d9-be9c-46d0-92d9-0ebfd841a50c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003286935s
addons_test.go:967: (dbg) Run:  kubectl --context addons-718460 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 ssh "cat /opt/local-path-provisioner/pvc-1afac192-e0ae-4f4a-af4b-19ffc8f3bcd9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-718460 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-718460 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.25s)
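The ssh "cat" above shows how the local-path provisioner lays volumes out on the node: each bound PVC is backed by a host directory named <pv-name>_<namespace>_<claim-name> under /opt/local-path-provisioner, where the PV name is pvc-<uid>. While the claim is still bound, the same path can be derived by hand, roughly:

	PV=$(kubectl --context addons-718460 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	minikube -p addons-718460 ssh -- ls "/opt/local-path-provisioner/${PV}_default_test-pvc"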

TestAddons/parallel/NvidiaDevicePlugin (5.77s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-79j7x" [6a19a5b3-4a20-4806-9b2c-9b12c44d4be1] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.012723371s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.77s)

TestAddons/parallel/Yakd (11.8s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lvx46" [d63d6b17-2c7f-47f8-8048-a11aff6da1b3] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003677613s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-718460 addons disable yakd --alsologtostderr -v=1: (5.798846771s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

TestAddons/StoppedEnableDisable (12.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-718460
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-718460: (11.859268338s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-718460
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-718460
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-718460
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

TestCertOptions (44.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-945820 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0929 11:15:42.963241    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-945820 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (41.68128491s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-945820 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-945820 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-945820 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-945820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-945820
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-945820: (1.992348159s)
--- PASS: TestCertOptions (44.37s)
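The openssl run above is checking that the extra --apiserver-ips and --apiserver-names made it into the serving certificate's SANs, and the admin.conf check that the API server moved to port 8555. A hand-run equivalent of the SAN check would be something like:

	minikube -p cert-options-945820 ssh -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	# expect 192.168.15.15 among the IPs and www.google.com among the DNS names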

TestCertExpiration (254.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-392897 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-392897 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.044126234s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-392897 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-392897 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.131212067s)
helpers_test.go:175: Cleaning up "cert-expiration-392897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-392897
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-392897: (3.03391267s)
--- PASS: TestCertExpiration (254.21s)
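Most of this test's 254s is waiting rather than work: the cluster is created with certificates that expire in 3 minutes, the expiry window is waited out, and the second start with --cert-expiration=8760h has to detect the lapsed certs and regenerate them. Roughly 41s first start + ~180s expiry wait + 30s restart + cleanup ≈ 254s. The equivalent manual sequence, as a sketch:

	minikube start -p cert-expiration-392897 --memory=3072 --cert-expiration=3m \
	  --driver=docker --container-runtime=crio
	sleep 180   # let the 3m certificates lapse
	minikube start -p cert-expiration-392897 --memory=3072 --cert-expiration=8760h \
	  --driver=docker --container-runtime=crio   # regenerates the expired certs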

TestForceSystemdFlag (32.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-830061 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-830061 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.541964453s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-830061 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-830061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-830061
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-830061: (2.443805367s)
--- PASS: TestForceSystemdFlag (32.31s)
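The file the test cats is where minikube drops its CRI-O overrides; with --force-systemd the expectation is that CRI-O was switched to the systemd cgroup manager. A quick manual check along the same lines (the exact assertion is in docker_test.go, not shown here):

	minikube -p force-systemd-flag-830061 ssh -- \
	  grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
	# expect: cgroup_manager = "systemd"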

TestForceSystemdEnv (42.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-934241 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-934241 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.588820284s)
helpers_test.go:175: Cleaning up "force-systemd-env-934241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-934241
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-934241: (2.706943049s)
--- PASS: TestForceSystemdEnv (42.30s)
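TestForceSystemdEnv drives the same behaviour through the environment rather than the flag; the harness presumably sets MINIKUBE_FORCE_SYSTEMD before invoking start, along the lines of:

	MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-934241 \
	  --memory=3072 --driver=docker --container-runtime=crio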

TestErrorSpam/setup (28.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-818484 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-818484 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-818484 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-818484 --driver=docker  --container-runtime=crio: (28.014241581s)
--- PASS: TestErrorSpam/setup (28.02s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 stop: (1.280167948s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-818484 --log_dir /tmp/nospam-818484 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21657-2306/.minikube/files/etc/test/nested/copy/4108/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-599498 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0929 10:28:10.551497    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:10.559257    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:10.570559    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:10.591912    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:10.633253    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:10.714628    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:10.876097    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:11.197739    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:11.839625    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:13.121385    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:15.683283    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:20.805301    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:31.046638    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:28:51.528044    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-599498 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m23.880558445s)
--- PASS: TestFunctional/serial/StartWithProxy (83.88s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.59s)

=== RUN   TestFunctional/serial/SoftStart
I0929 10:29:24.809202    4108 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-599498 --alsologtostderr -v=8
E0929 10:29:32.490281    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-599498 --alsologtostderr -v=8: (28.586492128s)
functional_test.go:678: soft start took 28.592136192s for "functional-599498" cluster.
I0929 10:29:53.396026    4108 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (28.59s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-599498 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 cache add registry.k8s.io/pause:3.1: (1.265995402s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 cache add registry.k8s.io/pause:3.3: (1.255898608s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 cache add registry.k8s.io/pause:latest: (1.2647413s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-599498 /tmp/TestFunctionalserialCacheCmdcacheadd_local180809464/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cache add minikube-local-cache-test:functional-599498
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cache delete minikube-local-cache-test:functional-599498
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-599498
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.881792ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 cache reload: (1.365640762s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.32s)
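The reload sequence above is the full round trip: remove the image from the node's runtime, confirm crictl no longer finds it (the expected exit status 1 above), then let minikube push it back from the on-host cache. As a standalone sketch of the same commands:

	minikube -p functional-599498 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-599498 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
	minikube -p functional-599498 cache reload
	minikube -p functional-599498 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again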

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 kubectl -- --context functional-599498 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-599498 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (30.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-599498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-599498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.956389372s)
functional_test.go:776: restart took 30.956494072s for "functional-599498" cluster.
I0929 10:30:32.824592    4108 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (30.96s)
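--extra-config takes component.key=value pairs and splices them into the generated kubeadm configuration, so the restart above should leave the API server running with the extra admission plugin. One way to confirm the flag landed (the pod name assumes the default single-node naming; this verification is a sketch, not the test's own check):

	minikube start -p functional-599498 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-599498 -n kube-system get pod \
	  kube-apiserver-functional-599498 -o jsonpath='{.spec.containers[0].command}' \
	  | tr ',' '\n' | grep enable-admission-plugins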

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-599498 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 logs: (1.792526882s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 logs --file /tmp/TestFunctionalserialLogsFileCmd1392381289/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 logs --file /tmp/TestFunctionalserialLogsFileCmd1392381289/001/logs.txt: (1.766477261s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

TestFunctional/serial/InvalidService (4.73s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-599498 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-599498
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-599498: exit status 115 (489.789188ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32225 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-599498 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.73s)
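SVC_UNREACHABLE here is minikube refusing to open a service whose selector matches no running pod. The log does not show testdata/invalidsvc.yaml, but a service that reproduces the failure mode is simply one that never gets endpoints; a sketch with illustrative names:

	kubectl --context functional-599498 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod   # matches no pod, so the service has no endpoints
	  ports:
	    - port: 80
	EOF
	minikube service invalid-svc -p functional-599498   # exit status 115, SVC_UNREACHABLE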

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 config get cpus: exit status 14 (67.280795ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 config get cpus: exit status 14 (55.433135ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
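The exit status 14 runs are the point of this test: `config get` on an unset key is an error with a distinct exit code, not an empty success. The cycle being exercised, restated as plain commands:

	minikube -p functional-599498 config set cpus 2
	minikube -p functional-599498 config get cpus    # prints 2, exit 0
	minikube -p functional-599498 config unset cpus
	minikube -p functional-599498 config get cpus    # "specified key could not be found in config", exit 14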

TestFunctional/parallel/DashboardCmd (11.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-599498 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-599498 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 33627: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.05s)

TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-599498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-599498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (287.931195ms)

-- stdout --
	* [functional-599498] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0929 10:41:12.294082   33077 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:41:12.294791   33077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:41:12.294931   33077 out.go:374] Setting ErrFile to fd 2...
	I0929 10:41:12.294944   33077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:41:12.295950   33077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:41:12.297738   33077 out.go:368] Setting JSON to false
	I0929 10:41:12.305151   33077 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1422,"bootTime":1759141051,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 10:41:12.305247   33077 start.go:140] virtualization:  
	I0929 10:41:12.309057   33077 out.go:179] * [functional-599498] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 10:41:12.312056   33077 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:41:12.312242   33077 notify.go:220] Checking for updates...
	I0929 10:41:12.318070   33077 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:41:12.321173   33077 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:41:12.324066   33077 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 10:41:12.327019   33077 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 10:41:12.329963   33077 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:41:12.333416   33077 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:41:12.334000   33077 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:41:12.370767   33077 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 10:41:12.370887   33077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:41:12.455239   33077 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 10:41:12.445572749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:41:12.455348   33077 docker.go:318] overlay module found
	I0929 10:41:12.458559   33077 out.go:179] * Using the docker driver based on existing profile
	I0929 10:41:12.461477   33077 start.go:304] selected driver: docker
	I0929 10:41:12.461496   33077 start.go:924] validating driver "docker" against &{Name:functional-599498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:41:12.461602   33077 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:41:12.465229   33077 out.go:203] 
	W0929 10:41:12.467949   33077 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:41:12.470744   33077 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-599498 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)
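--dry-run runs the full validation path against the existing profile without mutating it, which is why the undersized --memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) while the second, well-formed invocation passes. In short:

	minikube start -p functional-599498 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio   # exit 23: 250MiB < 1800MB usable minimum
	minikube start -p functional-599498 --dry-run --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio   # validates cleanly; nothing is started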

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-599498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-599498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (247.343614ms)

-- stdout --
	* [functional-599498] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0929 10:41:12.016112   33008 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:41:12.016378   33008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:41:12.016412   33008 out.go:374] Setting ErrFile to fd 2...
	I0929 10:41:12.016433   33008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:41:12.016860   33008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:41:12.017429   33008 out.go:368] Setting JSON to false
	I0929 10:41:12.018252   33008 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1421,"bootTime":1759141051,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 10:41:12.018324   33008 start.go:140] virtualization:  
	I0929 10:41:12.021854   33008 out.go:179] * [functional-599498] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0929 10:41:12.025748   33008 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:41:12.025996   33008 notify.go:220] Checking for updates...
	I0929 10:41:12.032452   33008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:41:12.035366   33008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 10:41:12.038305   33008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 10:41:12.041273   33008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 10:41:12.044237   33008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:41:12.048627   33008 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:41:12.049193   33008 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:41:12.087596   33008 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 10:41:12.087749   33008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:41:12.165915   33008 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 10:41:12.156132719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:41:12.166019   33008 docker.go:318] overlay module found
	I0929 10:41:12.169259   33008 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 10:41:12.172081   33008 start.go:304] selected driver: docker
	I0929 10:41:12.172100   33008 start.go:924] validating driver "docker" against &{Name:functional-599498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-599498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:41:12.172188   33008 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:41:12.175761   33008 out.go:203] 
	W0929 10:41:12.178730   33008 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:41:12.181714   33008 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
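The test above deliberately runs minikube under a French locale and passes once the localized RSRC_INSUFFICIENT_REQ_MEMORY error appears. A minimal sketch of driving the CLI that way, assuming the binary path from this report and Go 1.19+ for cmd.Environ; an illustration, not the suite's actual helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run minikube with LC_ALL=fr and a deliberately undersized memory
        // request so it fails fast with the localized
        // RSRC_INSUFFICIENT_REQ_MEMORY message captured in the log above.
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "functional-599498", "--memory=250MB", "--alsologtostderr")
        cmd.Env = append(cmd.Environ(), "LC_ALL=fr")
        out, err := cmd.CombinedOutput()
        fmt.Printf("exit: %v\n%s", err, out)
    }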

TestFunctional/parallel/StatusCmd (1.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
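The -f argument in the second status invocation above is a Go text/template rendered against minikube's status object; the label "kublet" is reproduced verbatim from the logged command (a typo for kubelet in the template text, not in the field name). A minimal sketch of how such a template renders, assuming a Status struct with the four fields the placeholders reference:

    package main

    import (
        "os"
        "text/template"
    )

    // Status stands in for the object minikube renders status templates
    // against; the struct itself is an assumption for illustration.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
        _ = tmpl.Execute(os.Stdout, s)
    }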

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (24.86s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1523dcbc-3074-4a83-b79d-31d0fc96a8b0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003789986s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-599498 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-599498 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-599498 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-599498 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b1a1715a-af4a-4451-9556-414bd08f1fff] Pending
helpers_test.go:352: "sp-pod" [b1a1715a-af4a-4451-9556-414bd08f1fff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b1a1715a-af4a-4451-9556-414bd08f1fff] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.029806125s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-599498 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-599498 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-599498 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7915bf28-948a-4a08-bce6-d8997ce54f9f] Pending
helpers_test.go:352: "sp-pod" [7915bf28-948a-4a08-bce6-d8997ce54f9f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7915bf28-948a-4a08-bce6-d8997ce54f9f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00382971s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-599498 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.86s)
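Each "waiting 4m0s for pods matching ..." line above comes from a loop that polls label-selected pods until they reach the wanted phase. A simplified sketch of such a loop shelling out to kubectl, assuming a 2-second poll interval and the hypothetical helper name waitForPhase (not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPhase polls kubectl until a label-selected pod reports the
    // wanted phase or the deadline passes.
    func waitForPhase(context, ns, selector, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context, "-n", ns,
                "get", "pods", "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.Contains(string(out), want) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s to be %s", selector, want)
    }

    func main() {
        fmt.Println(waitForPhase("functional-599498", "default",
            "test=storage-provisioner", "Running", 4*time.Minute))
    }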

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh -n functional-599498 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cp functional-599498:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3637814465/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh -n functional-599498 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh -n functional-599498 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.49s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4108/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /etc/test/nested/copy/4108/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4108.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /etc/ssl/certs/4108.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4108.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /usr/share/ca-certificates/4108.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /etc/ssl/certs/41082.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41082.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /usr/share/ca-certificates/41082.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.08s)
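The hash-named files checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL convention of exposing CA certificates under their subject hash plus a ".0" suffix. A small sketch that recomputes the expected name with the standard openssl x509 -subject_hash option, assuming the synced cert is readable at the path from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Print the subject hash of the synced cert; the CertSync test
        // above expects a matching <hash>.0 file alongside it.
        out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
            "-in", "/etc/ssl/certs/4108.pem").Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        fmt.Printf("expect /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
    }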

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-599498 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh "sudo systemctl is-active docker": exit status 1 (309.207272ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh "sudo systemctl is-active containerd": exit status 1 (361.498877ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
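The non-zero exits above are the expected result: systemctl is-active exits 0 only for an active unit, and an inactive one prints "inactive" and exits with status 3, so docker and containerd being inactive is correct for a crio cluster. A sketch of reading that exit code, run locally for brevity where the test goes through minikube ssh:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // CombinedOutput captures "inactive"; the *exec.ExitError carries
        // systemctl's status code (3 for an inactive unit).
        out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
        code := 0
        if ee, ok := err.(*exec.ExitError); ok {
            code = ee.ExitCode()
        }
        fmt.Printf("state=%s exit=%d\n", strings.TrimSpace(string(out)), code)
    }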

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-599498 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-599498 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-599498 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 29338: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-599498 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-599498 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-599498 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [aff64586-aa8f-4e3e-ae61-2bb1af0411a8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [aff64586-aa8f-4e3e-ae61-2bb1af0411a8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00379984s
I0929 10:30:52.465324    4108 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-599498 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.53.6 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
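AccessDirect passes once the service's LoadBalancer IP (10.103.53.6 above) is reachable straight from the host through the running tunnel. A sketch of that reachability check with a bounded timeout; the 5-second value is an assumption, not the test's setting:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // A short client timeout keeps a broken tunnel from hanging the check.
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://10.103.53.6")
        if err != nil {
            fmt.Println("tunnel not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("tunnel status:", resp.Status)
    }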

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-599498 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "377.306238ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.041688ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "352.06433ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.920269ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (9.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdany-port2124093550/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759142458278378914" to /tmp/TestFunctionalparallelMountCmdany-port2124093550/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759142458278378914" to /tmp/TestFunctionalparallelMountCmdany-port2124093550/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759142458278378914" to /tmp/TestFunctionalparallelMountCmdany-port2124093550/001/test-1759142458278378914
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.289736ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 10:40:58.611623    4108 retry.go:31] will retry after 519.781766ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 10:40 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 10:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 10:40 test-1759142458278378914
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh cat /mount-9p/test-1759142458278378914
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-599498 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [62fc0169-de4f-404e-b8f5-27813ff8927d] Pending
helpers_test.go:352: "busybox-mount" [62fc0169-de4f-404e-b8f5-27813ff8927d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [62fc0169-de4f-404e-b8f5-27813ff8927d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [62fc0169-de4f-404e-b8f5-27813ff8927d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00304065s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-599498 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdany-port2124093550/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.13s)
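The retry.go line above ("will retry after 519.781766ms") shows the mount test probing the 9p mount with findmnt and backing off before trying again. A simplified local sketch of that probe loop; the doubling backoff and attempt count are assumptions, and the real test runs findmnt over minikube ssh:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeMount retries "findmnt -T <dir>" with a growing backoff until
    // the mount shows up or the attempts run out.
    func probeMount(dir string, attempts int) error {
        backoff := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if err := exec.Command("findmnt", "-T", dir).Run(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v\n", backoff)
            time.Sleep(backoff)
            backoff *= 2
        }
        return fmt.Errorf("%s never appeared as a mount point", dir)
    }

    func main() {
        fmt.Println(probeMount("/mount-9p", 5))
    }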

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdspecific-port1452337307/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.501244ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 10:41:07.770169    4108 retry.go:31] will retry after 252.945374ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdspecific-port1452337307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh "sudo umount -f /mount-9p": exit status 1 (318.995158ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-599498 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdspecific-port1452337307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2177905651/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2177905651/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2177905651/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-599498 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2177905651/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2177905651/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-599498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2177905651/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/ServiceCmd/List (0.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 service list -o json
functional_test.go:1504: Took "640.19317ms" to run "out/minikube-linux-arm64 -p functional-599498 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 version -o=json --components: (1.372266069s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-599498 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-599498
localhost/kicbase/echo-server:functional-599498
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-599498 image ls --format short --alsologtostderr:
I0929 10:41:27.480259   35403 out.go:360] Setting OutFile to fd 1 ...
I0929 10:41:27.480470   35403 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:27.480497   35403 out.go:374] Setting ErrFile to fd 2...
I0929 10:41:27.480518   35403 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:27.480801   35403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
I0929 10:41:27.481438   35403 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:27.481600   35403 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:27.482107   35403 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
I0929 10:41:27.504061   35403 ssh_runner.go:195] Run: systemctl --version
I0929 10:41:27.504281   35403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
I0929 10:41:27.526329   35403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
I0929 10:41:27.619526   35403 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-599498 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ localhost/kicbase/echo-server           │ functional-599498  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-599498  │ 2be8fa298083c │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ 17848b7d08d19 │ 202MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-599498 image ls --format table --alsologtostderr:
I0929 10:41:28.354575   35594 out.go:360] Setting OutFile to fd 1 ...
I0929 10:41:28.354730   35594 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:28.354762   35594 out.go:374] Setting ErrFile to fd 2...
I0929 10:41:28.354775   35594 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:28.355047   35594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
I0929 10:41:28.355739   35594 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:28.355902   35594 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:28.356395   35594 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
I0929 10:41:28.376583   35594 ssh_runner.go:195] Run: systemctl --version
I0929 10:41:28.376643   35594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
I0929 10:41:28.402454   35594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
I0929 10:41:28.504451   35594 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-599498 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6b
c340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5
800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97","repoDigests":["docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"2be8fa298083c3aec831bdb65ad51056eff5d2c4f43e6af3f4f3a5f4542c21ed","repoDigests":["localhost/minikube-local-cache-test@sha256:9ae736f9f1e2671ae016b16e4578895c49426927e390f26c7f2197d84036c148"],"repoTags":["localhost/minikube-local-cache-test:functional-599498"],"size":"3328"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["r
egistry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@s
ha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a46
75abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-599498"],"size":"4788229"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/p
ause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-599498 image ls --format json --alsologtostderr:
I0929 10:41:28.063422   35511 out.go:360] Setting OutFile to fd 1 ...
I0929 10:41:28.064157   35511 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:28.064201   35511 out.go:374] Setting ErrFile to fd 2...
I0929 10:41:28.064222   35511 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:28.064522   35511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
I0929 10:41:28.065610   35511 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:28.066009   35511 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:28.067797   35511 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
I0929 10:41:28.092607   35511 ssh_runner.go:195] Run: systemctl --version
I0929 10:41:28.092662   35511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
I0929 10:41:28.115362   35511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
I0929 10:41:28.211511   35511 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
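The JSON listing above is an array of image records with id, repoDigests, repoTags, and size fields, with size reported as a string of bytes. A sketch decoding a trimmed sample of that output; the struct is inferred from the fields visible in the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // image mirrors the fields visible in the "image ls --format json" output.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        // One record trimmed from the listing above.
        data := []byte(`[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]`)
        var imgs []image
        if err := json.Unmarshal(data, &imgs); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        for _, im := range imgs {
            fmt.Printf("%s -> %v (%s bytes)\n", im.ID[:12], im.RepoTags, im.Size)
        }
    }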

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-599498 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97
repoDigests:
- docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-599498
size: "4788229"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2be8fa298083c3aec831bdb65ad51056eff5d2c4f43e6af3f4f3a5f4542c21ed
repoDigests:
- localhost/minikube-local-cache-test@sha256:9ae736f9f1e2671ae016b16e4578895c49426927e390f26c7f2197d84036c148
repoTags:
- localhost/minikube-local-cache-test:functional-599498
size: "3328"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-599498 image ls --format yaml --alsologtostderr:
I0929 10:41:27.725904   35434 out.go:360] Setting OutFile to fd 1 ...
I0929 10:41:27.726207   35434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:27.726236   35434 out.go:374] Setting ErrFile to fd 2...
I0929 10:41:27.726257   35434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:27.726551   35434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
I0929 10:41:27.727259   35434 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:27.727446   35434 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:27.727959   35434 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
I0929 10:41:27.745940   35434 ssh_runner.go:195] Run: systemctl --version
I0929 10:41:27.745996   35434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
I0929 10:41:27.778573   35434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
I0929 10:41:27.883395   35434 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
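Note: the YAML inventory above can be reproduced by hand. A minimal sketch, assuming the functional-599498 profile from this run (both commands appear verbatim in the trace):

	# Ask minikube for the runtime's image list in YAML form:
	out/minikube-linux-arm64 -p functional-599498 image ls --format yaml
	# Under the hood, minikube shells into the node and queries crictl:
	out/minikube-linux-arm64 -p functional-599498 ssh "sudo crictl images --output json"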

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-599498 ssh pgrep buildkitd: exit status 1 (324.202941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image build -t localhost/my-image:functional-599498 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 image build -t localhost/my-image:functional-599498 testdata/build --alsologtostderr: (3.388593372s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-599498 image build -t localhost/my-image:functional-599498 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8c33466f656
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-599498
--> 12828ae708f
Successfully tagged localhost/my-image:functional-599498
12828ae708f60bc6d985f754a62e55fc6b5cc280d4d75a00e74d3734fa09a09d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-599498 image build -t localhost/my-image:functional-599498 testdata/build --alsologtostderr:
I0929 10:41:28.228300   35566 out.go:360] Setting OutFile to fd 1 ...
I0929 10:41:28.229326   35566 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:28.229372   35566 out.go:374] Setting ErrFile to fd 2...
I0929 10:41:28.229393   35566 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:41:28.229708   35566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
I0929 10:41:28.230354   35566 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:28.231793   35566 config.go:182] Loaded profile config "functional-599498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:41:28.232368   35566 cli_runner.go:164] Run: docker container inspect functional-599498 --format={{.State.Status}}
I0929 10:41:28.272798   35566 ssh_runner.go:195] Run: systemctl --version
I0929 10:41:28.272853   35566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-599498
I0929 10:41:28.301316   35566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/functional-599498/id_rsa Username:docker}
I0929 10:41:28.401318   35566 build_images.go:161] Building image from path: /tmp/build.2778603695.tar
I0929 10:41:28.401407   35566 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 10:41:28.412259   35566 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2778603695.tar
I0929 10:41:28.420021   35566 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2778603695.tar: stat -c "%s %y" /var/lib/minikube/build/build.2778603695.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2778603695.tar': No such file or directory
I0929 10:41:28.420057   35566 ssh_runner.go:362] scp /tmp/build.2778603695.tar --> /var/lib/minikube/build/build.2778603695.tar (3072 bytes)
I0929 10:41:28.446700   35566 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2778603695
I0929 10:41:28.457073   35566 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2778603695 -xf /var/lib/minikube/build/build.2778603695.tar
I0929 10:41:28.466746   35566 crio.go:315] Building image: /var/lib/minikube/build/build.2778603695
I0929 10:41:28.466821   35566 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-599498 /var/lib/minikube/build/build.2778603695 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0929 10:41:31.535704   35566 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-599498 /var/lib/minikube/build/build.2778603695 --cgroup-manager=cgroupfs: (3.068854192s)
I0929 10:41:31.535772   35566 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2778603695
I0929 10:41:31.544668   35566 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2778603695.tar
I0929 10:41:31.553620   35566 build_images.go:217] Built localhost/my-image:functional-599498 from /tmp/build.2778603695.tar
I0929 10:41:31.553651   35566 build_images.go:133] succeeded building to: functional-599498
I0929 10:41:31.553657   35566 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)
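Note: the stderr above traces the full build path: minikube tars the local context, copies it to /var/lib/minikube/build inside the node, and delegates to podman with the cgroupfs manager. A hand-run sketch of the same flow, assuming this run's profile (the staged build.2778603695 directory is ephemeral and specific to this run):

	# High-level entry point, as the test invokes it:
	out/minikube-linux-arm64 -p functional-599498 image build -t localhost/my-image:functional-599498 testdata/build
	# Equivalent in-node step, as logged above:
	out/minikube-linux-arm64 -p functional-599498 ssh "sudo podman build -t localhost/my-image:functional-599498 /var/lib/minikube/build/build.2778603695 --cgroup-manager=cgroupfs"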

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-599498
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image load --daemon kicbase/echo-server:functional-599498 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-599498 image load --daemon kicbase/echo-server:functional-599498 --alsologtostderr: (2.973543034s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image load --daemon kicbase/echo-server:functional-599498 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-599498
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image load --daemon kicbase/echo-server:functional-599498 --alsologtostderr
2025/09/29 10:41:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image save kicbase/echo-server:functional-599498 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image rm kicbase/echo-server:functional-599498 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)
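Note: ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip between the host and the cluster runtime. A minimal sketch, assuming this run's profile (the /tmp/echo-server.tar path is hypothetical; the test used a workspace path):

	# Export an image from the cluster runtime to a tarball on the host:
	out/minikube-linux-arm64 -p functional-599498 image save kicbase/echo-server:functional-599498 /tmp/echo-server.tar
	# Re-import it and confirm it is listed again:
	out/minikube-linux-arm64 -p functional-599498 image load /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-599498 image ls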

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
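Note: all three UpdateContextCmd variants run the same command; it rewrites the kubeconfig entry for the profile to point at the cluster's current endpoint. Usage, as the tests invoke it:

	out/minikube-linux-arm64 -p functional-599498 update-context --alsologtostderr -v=2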

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-599498
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-599498 image save --daemon kicbase/echo-server:functional-599498 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-599498
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-599498
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-599498
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-599498
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (200.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 10:43:10.545605    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:44:33.615865    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m19.497576522s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.34s)
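Note: the single start invocation above carries the whole HA topology. An annotated sketch of the same command; the per-flag readings are inferred from this run's behavior, not quoted from minikube documentation:

	# --ha: provision multiple control-plane nodes (three here, per the status output)
	# --memory 3072: per-node memory in MiB
	# --wait true: block until cluster components report ready
	out/minikube-linux-arm64 -p ha-204623 start --ha --memory 3072 --wait true \
	  --driver=docker --container-runtime=crio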

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 kubectl -- rollout status deployment/busybox: (5.706606887s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-fz95f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-mtfml -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-fz95f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-mtfml -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-fz95f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-mtfml -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.76s)
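Note: DeployApp is a rollout followed by in-pod DNS resolution against three names of increasing specificity. The per-pod pattern, condensed (busybox-7b57f96db7-2csmz is one of this run's pods):

	out/minikube-linux-arm64 -p ha-204623 kubectl -- rollout status deployment/busybox
	# Resolve one external name and two in-cluster names from inside the pod:
	out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- nslookup kubernetes.io
	out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- nslookup kubernetes.default
	out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- nslookup kubernetes.default.svc.cluster.local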

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-fz95f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-fz95f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-mtfml -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-mtfml -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
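Note: the pipeline in these exec calls extracts the IP that host.minikube.internal resolves to and then pings it from inside the pod. A sketch with the parsing assumption made explicit (NR==5 and field 3 match busybox nslookup's output layout; that layout is an assumption of the test, not a stable interface):

	HOST_IP=$(out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-arm64 -p ha-204623 kubectl -- exec busybox-7b57f96db7-2csmz -- sh -c "ping -c 1 $HOST_IP"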

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node add --alsologtostderr -v 5
E0929 10:45:42.963729    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:42.970208    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:42.981647    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:43.003190    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:43.044617    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:43.126101    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:43.287533    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:43.609280    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:44.251246    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:45.533315    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:48.095174    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:45:53.217037    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 node add --alsologtostderr -v 5: (58.162622868s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
E0929 10:46:03.458922    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5: (1.009997143s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-204623 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.063859676s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp testdata/cp-test.txt ha-204623:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2903457524/001/cp-test_ha-204623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623:/home/docker/cp-test.txt ha-204623-m02:/home/docker/cp-test_ha-204623_ha-204623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test_ha-204623_ha-204623-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623:/home/docker/cp-test.txt ha-204623-m03:/home/docker/cp-test_ha-204623_ha-204623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test_ha-204623_ha-204623-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623:/home/docker/cp-test.txt ha-204623-m04:/home/docker/cp-test_ha-204623_ha-204623-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test_ha-204623_ha-204623-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp testdata/cp-test.txt ha-204623-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2903457524/001/cp-test_ha-204623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m02:/home/docker/cp-test.txt ha-204623:/home/docker/cp-test_ha-204623-m02_ha-204623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test_ha-204623-m02_ha-204623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m02:/home/docker/cp-test.txt ha-204623-m03:/home/docker/cp-test_ha-204623-m02_ha-204623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test_ha-204623-m02_ha-204623-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m02:/home/docker/cp-test.txt ha-204623-m04:/home/docker/cp-test_ha-204623-m02_ha-204623-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test_ha-204623-m02_ha-204623-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp testdata/cp-test.txt ha-204623-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2903457524/001/cp-test_ha-204623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m03:/home/docker/cp-test.txt ha-204623:/home/docker/cp-test_ha-204623-m03_ha-204623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test_ha-204623-m03_ha-204623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m03:/home/docker/cp-test.txt ha-204623-m02:/home/docker/cp-test_ha-204623-m03_ha-204623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test_ha-204623-m03_ha-204623-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m03:/home/docker/cp-test.txt ha-204623-m04:/home/docker/cp-test_ha-204623-m03_ha-204623-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test_ha-204623-m03_ha-204623-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp testdata/cp-test.txt ha-204623-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2903457524/001/cp-test_ha-204623-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m04:/home/docker/cp-test.txt ha-204623:/home/docker/cp-test_ha-204623-m04_ha-204623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623 "sudo cat /home/docker/cp-test_ha-204623-m04_ha-204623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m04:/home/docker/cp-test.txt ha-204623-m02:/home/docker/cp-test_ha-204623-m04_ha-204623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 "sudo cat /home/docker/cp-test_ha-204623-m04_ha-204623-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 cp ha-204623-m04:/home/docker/cp-test.txt ha-204623-m03:/home/docker/cp-test_ha-204623-m04_ha-204623-m03.txt
E0929 10:46:23.940670    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m03 "sudo cat /home/docker/cp-test_ha-204623-m04_ha-204623-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.52s)
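Note: CopyFile walks the full node-pair matrix with minikube cp, verifying every transfer by cat over SSH. The per-pair pattern, reduced to one host-to-node and one node-to-node hop (node names from this run):

	out/minikube-linux-arm64 -p ha-204623 cp testdata/cp-test.txt ha-204623:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-204623 cp ha-204623:/home/docker/cp-test.txt \
	  ha-204623-m02:/home/docker/cp-test_ha-204623_ha-204623-m02.txt
	out/minikube-linux-arm64 -p ha-204623 ssh -n ha-204623-m02 \
	  "sudo cat /home/docker/cp-test_ha-204623_ha-204623-m02.txt"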

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 node stop m02 --alsologtostderr -v 5: (11.97214577s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5: exit status 7 (763.880298ms)

                                                
                                                
-- stdout --
	ha-204623
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-204623-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-204623-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-204623-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:46:37.066063   51440 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:46:37.066247   51440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:46:37.066276   51440 out.go:374] Setting ErrFile to fd 2...
	I0929 10:46:37.066295   51440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:46:37.066647   51440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:46:37.066896   51440 out.go:368] Setting JSON to false
	I0929 10:46:37.066962   51440 mustload.go:65] Loading cluster: ha-204623
	I0929 10:46:37.067034   51440 notify.go:220] Checking for updates...
	I0929 10:46:37.068295   51440 config.go:182] Loaded profile config "ha-204623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:46:37.068350   51440 status.go:174] checking status of ha-204623 ...
	I0929 10:46:37.069010   51440 cli_runner.go:164] Run: docker container inspect ha-204623 --format={{.State.Status}}
	I0929 10:46:37.088141   51440 status.go:371] ha-204623 host status = "Running" (err=<nil>)
	I0929 10:46:37.088163   51440 host.go:66] Checking if "ha-204623" exists ...
	I0929 10:46:37.088610   51440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-204623
	I0929 10:46:37.124601   51440 host.go:66] Checking if "ha-204623" exists ...
	I0929 10:46:37.124892   51440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:46:37.124933   51440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-204623
	I0929 10:46:37.154378   51440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/ha-204623/id_rsa Username:docker}
	I0929 10:46:37.252512   51440 ssh_runner.go:195] Run: systemctl --version
	I0929 10:46:37.256712   51440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:46:37.268400   51440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:46:37.329940   51440 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-29 10:46:37.319190186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 10:46:37.330649   51440 kubeconfig.go:125] found "ha-204623" server: "https://192.168.49.254:8443"
	I0929 10:46:37.330686   51440 api_server.go:166] Checking apiserver status ...
	I0929 10:46:37.330738   51440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:46:37.341501   51440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	I0929 10:46:37.350794   51440 api_server.go:182] apiserver freezer: "13:freezer:/docker/388be365593ad57597f69a05f76eb0e730b16f44aac0231bb20a9eb817c3edc4/crio/crio-14f5e64f6d8cf9db71888cbd68742b68f9f6563f1076f219268e03cac9a7a166"
	I0929 10:46:37.350860   51440 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/388be365593ad57597f69a05f76eb0e730b16f44aac0231bb20a9eb817c3edc4/crio/crio-14f5e64f6d8cf9db71888cbd68742b68f9f6563f1076f219268e03cac9a7a166/freezer.state
	I0929 10:46:37.359705   51440 api_server.go:204] freezer state: "THAWED"
	I0929 10:46:37.359744   51440 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 10:46:37.367961   51440 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 10:46:37.367994   51440 status.go:463] ha-204623 apiserver status = Running (err=<nil>)
	I0929 10:46:37.368006   51440 status.go:176] ha-204623 status: &{Name:ha-204623 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:46:37.368024   51440 status.go:174] checking status of ha-204623-m02 ...
	I0929 10:46:37.368320   51440 cli_runner.go:164] Run: docker container inspect ha-204623-m02 --format={{.State.Status}}
	I0929 10:46:37.385632   51440 status.go:371] ha-204623-m02 host status = "Stopped" (err=<nil>)
	I0929 10:46:37.385658   51440 status.go:384] host is not running, skipping remaining checks
	I0929 10:46:37.385665   51440 status.go:176] ha-204623-m02 status: &{Name:ha-204623-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:46:37.385685   51440 status.go:174] checking status of ha-204623-m03 ...
	I0929 10:46:37.386009   51440 cli_runner.go:164] Run: docker container inspect ha-204623-m03 --format={{.State.Status}}
	I0929 10:46:37.404388   51440 status.go:371] ha-204623-m03 host status = "Running" (err=<nil>)
	I0929 10:46:37.404444   51440 host.go:66] Checking if "ha-204623-m03" exists ...
	I0929 10:46:37.404820   51440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-204623-m03
	I0929 10:46:37.423078   51440 host.go:66] Checking if "ha-204623-m03" exists ...
	I0929 10:46:37.423471   51440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:46:37.423526   51440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-204623-m03
	I0929 10:46:37.440650   51440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/ha-204623-m03/id_rsa Username:docker}
	I0929 10:46:37.536723   51440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:46:37.550083   51440 kubeconfig.go:125] found "ha-204623" server: "https://192.168.49.254:8443"
	I0929 10:46:37.550114   51440 api_server.go:166] Checking apiserver status ...
	I0929 10:46:37.550164   51440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:46:37.561309   51440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1360/cgroup
	I0929 10:46:37.570983   51440 api_server.go:182] apiserver freezer: "13:freezer:/docker/f8ac1e21bac8e81eb02d73ec00a13035769256858667b3de350ce6d6d3e5242c/crio/crio-f65603f012db826a0647e9fb4b636b990d225aa9929041a3b4133d07c808d366"
	I0929 10:46:37.571049   51440 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f8ac1e21bac8e81eb02d73ec00a13035769256858667b3de350ce6d6d3e5242c/crio/crio-f65603f012db826a0647e9fb4b636b990d225aa9929041a3b4133d07c808d366/freezer.state
	I0929 10:46:37.579579   51440 api_server.go:204] freezer state: "THAWED"
	I0929 10:46:37.579608   51440 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 10:46:37.588577   51440 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 10:46:37.588646   51440 status.go:463] ha-204623-m03 apiserver status = Running (err=<nil>)
	I0929 10:46:37.588670   51440 status.go:176] ha-204623-m03 status: &{Name:ha-204623-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:46:37.588761   51440 status.go:174] checking status of ha-204623-m04 ...
	I0929 10:46:37.589077   51440 cli_runner.go:164] Run: docker container inspect ha-204623-m04 --format={{.State.Status}}
	I0929 10:46:37.614576   51440 status.go:371] ha-204623-m04 host status = "Running" (err=<nil>)
	I0929 10:46:37.614602   51440 host.go:66] Checking if "ha-204623-m04" exists ...
	I0929 10:46:37.614886   51440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-204623-m04
	I0929 10:46:37.636763   51440 host.go:66] Checking if "ha-204623-m04" exists ...
	I0929 10:46:37.637056   51440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:46:37.637100   51440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-204623-m04
	I0929 10:46:37.658881   51440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/ha-204623-m04/id_rsa Username:docker}
	I0929 10:46:37.756846   51440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:46:37.769349   51440 status.go:176] ha-204623-m04 status: &{Name:ha-204623-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
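Note: the status stderr above shows how minikube decides "apiserver: Running": find the kube-apiserver PID, read its freezer cgroup, confirm the state is THAWED, then probe /healthz on the load-balanced endpoint. A hand-run sketch of the same probe (the PID 1448 and container hashes are specific to this run):

	out/minikube-linux-arm64 -p ha-204623 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
	out/minikube-linux-arm64 -p ha-204623 ssh "sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup"
	# 200 from healthz is what status reports as Running; -k skips cert verification:
	curl -k https://192.168.49.254:8443/healthz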

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (35.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node start m02 --alsologtostderr -v 5
E0929 10:47:04.901976    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 node start m02 --alsologtostderr -v 5: (34.08815223s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5: (1.236044035s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.169048588s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 stop --alsologtostderr -v 5: (26.183954184s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 start --wait true --alsologtostderr -v 5
E0929 10:48:10.547277    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:48:26.824055    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 start --wait true --alsologtostderr -v 5: (1m38.228526249s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.58s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 node delete m03 --alsologtostderr -v 5: (11.456363034s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.38s)
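Note: the go-template in the last step prints one Ready condition status per remaining node, so the test can assert every node reports True after the delete. The template, verbatim from the trace, annotated:

	# Walks .status.conditions per node and emits .status where type == Ready:
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"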

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 stop --alsologtostderr -v 5: (35.400719803s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5: exit status 7 (118.836914ms)

                                                
                                                
-- stdout --
	ha-204623
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-204623-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-204623-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:50:08.304014   65116 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:50:08.304218   65116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:50:08.304270   65116 out.go:374] Setting ErrFile to fd 2...
	I0929 10:50:08.304288   65116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:50:08.304569   65116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 10:50:08.304807   65116 out.go:368] Setting JSON to false
	I0929 10:50:08.304878   65116 mustload.go:65] Loading cluster: ha-204623
	I0929 10:50:08.304945   65116 notify.go:220] Checking for updates...
	I0929 10:50:08.305889   65116 config.go:182] Loaded profile config "ha-204623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:50:08.305946   65116 status.go:174] checking status of ha-204623 ...
	I0929 10:50:08.306592   65116 cli_runner.go:164] Run: docker container inspect ha-204623 --format={{.State.Status}}
	I0929 10:50:08.326383   65116 status.go:371] ha-204623 host status = "Stopped" (err=<nil>)
	I0929 10:50:08.326406   65116 status.go:384] host is not running, skipping remaining checks
	I0929 10:50:08.326413   65116 status.go:176] ha-204623 status: &{Name:ha-204623 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:50:08.326450   65116 status.go:174] checking status of ha-204623-m02 ...
	I0929 10:50:08.326749   65116 cli_runner.go:164] Run: docker container inspect ha-204623-m02 --format={{.State.Status}}
	I0929 10:50:08.357338   65116 status.go:371] ha-204623-m02 host status = "Stopped" (err=<nil>)
	I0929 10:50:08.357359   65116 status.go:384] host is not running, skipping remaining checks
	I0929 10:50:08.357365   65116 status.go:176] ha-204623-m02 status: &{Name:ha-204623-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:50:08.357384   65116 status.go:174] checking status of ha-204623-m04 ...
	I0929 10:50:08.357683   65116 cli_runner.go:164] Run: docker container inspect ha-204623-m04 --format={{.State.Status}}
	I0929 10:50:08.374696   65116 status.go:371] ha-204623-m04 host status = "Stopped" (err=<nil>)
	I0929 10:50:08.374716   65116 status.go:384] host is not running, skipping remaining checks
	I0929 10:50:08.374734   65116 status.go:176] ha-204623-m04 status: &{Name:ha-204623-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)
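The `status` call above is expected to fail: with every node stopped, minikube encodes the degraded components into the exit code (the documented scheme sets one bit each for host, cluster, and Kubernetes problems, so a fully stopped profile yields 1+2+4 = 7, matching the `exit status 7` seen here). A minimal sketch of decoding that, assuming the bitmask scheme:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "-p", "ha-204623", "status").Run()

	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}

	// Bit assignments per minikube's documented status exit codes.
	fmt.Println("host problem:      ", code&1 != 0)
	fmt.Println("cluster problem:   ", code&2 != 0)
	fmt.Println("kubernetes problem:", code&4 != 0)
}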

TestMultiControlPlane/serial/RestartCluster (78.38s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 10:50:42.963248    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:51:10.665652    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m17.362496731s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.38s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (81.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 node add --control-plane --alsologtostderr -v 5: (1m20.692146456s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-204623 status --alsologtostderr -v 5: (1.04987432s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.076225096s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (48.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-771536 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 10:53:10.545619    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-771536 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (48.345523127s)
--- PASS: TestJSONOutput/start/Command (48.35s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-771536 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-771536 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-771536 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-771536 --output=json --user=testUser: (5.819402596s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-829843 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-829843 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.856105ms)
-- stdout --
	{"specversion":"1.0","id":"73172d14-5386-47e4-9c4d-3571c806b7d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-829843] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f219133-0f05-4a1e-bcfa-684d47eeec73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21657"}}
	{"specversion":"1.0","id":"ca75859f-ba33-49a1-8288-75afde6608a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"659b8791-4ade-4b81-93cc-4f7179025970","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig"}}
	{"specversion":"1.0","id":"ec45d401-6de7-4314-9be2-994770ae9991","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube"}}
	{"specversion":"1.0","id":"0a252cb4-983d-44c6-b080-c82affc649da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"85b33085-75ab-494a-b1dd-c70d90353152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"47a6c772-d0c7-406a-b090-507aab9f86d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-829843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-829843
--- PASS: TestErrorJSONOutput (0.24s)
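Each line of the JSON output above is a CloudEvents envelope with a minikube-specific `type` and a string-keyed `data` payload; the failure surfaces as an `io.k8s.sigs.minikube.error` event carrying the exit code and the `DRV_UNSUPPORTED_OS` name. A minimal sketch of consuming that stream line by line; the struct mirrors only the fields visible in the log, not minikube's own types:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the envelope fields visible in the log above.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube start -p ... --output=json` into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON noise
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}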

TestKicCustomNetwork/create_custom_network (38.92s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-020451 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-020451 --network=: (36.875689865s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-020451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-020451
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-020451: (2.017038291s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.92s)

TestKicCustomNetwork/use_default_bridge_network (31.54s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-922364 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-922364 --network=bridge: (29.52042279s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-922364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-922364
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-922364: (1.986703549s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.54s)

TestKicExistingNetwork (37.49s)

=== RUN   TestKicExistingNetwork
I0929 10:55:11.246790    4108 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 10:55:11.263031    4108 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 10:55:11.263106    4108 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 10:55:11.263122    4108 cli_runner.go:164] Run: docker network inspect existing-network
W0929 10:55:11.279227    4108 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 10:55:11.279257    4108 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0929 10:55:11.279271    4108 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0929 10:55:11.279392    4108 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 10:55:11.296668    4108 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-67aad7d52d6a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:8f:bc:2b:fd:58} reservation:<nil>}
I0929 10:55:11.296965    4108 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400174b7d0}
I0929 10:55:11.297618    4108 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 10:55:11.297719    4108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 10:55:11.354986    4108 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-012562 --network=existing-network
E0929 10:55:42.963248    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-012562 --network=existing-network: (35.331628963s)
helpers_test.go:175: Cleaning up "existing-network-012562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-012562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-012562: (2.015721786s)
I0929 10:55:48.718789    4108 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.49s)
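The log above shows the setup this test exercises: the inspect of `existing-network` fails because the network does not exist yet, minikube skips 192.168.49.0/24 (already held by the first cluster's bridge), settles on the next free candidate 192.168.58.0/24, and creates the network itself before starting a profile against it. A minimal sketch of that candidate walk; the step of 9 between third octets (49, 58, 67, ...) is inferred from this run's logs, not taken from minikube's source:

package main

import "fmt"

// firstFreeSubnet walks the private /24 candidates visible in this
// report (192.168.49.0/24, then 58, 67, ...) and returns the first
// one that no existing Docker network has claimed.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 245; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // held by the first cluster's bridge, per the log
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.58.0/24, as chosen above
}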

TestKicCustomSubnet (33.75s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-732047 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-732047 --subnet=192.168.60.0/24: (31.56028135s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-732047 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-732047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-732047
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-732047: (2.155308885s)
--- PASS: TestKicCustomSubnet (33.75s)
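The verification step runs `docker network inspect` with a format template that digs the subnet out of the first IPAM config entry. A minimal sketch of the same check from Go, with the profile name and expected CIDR copied from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-732047",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	subnet := strings.TrimSpace(string(out))
	fmt.Println("subnet matches:", subnet == "192.168.60.0/24")
}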

TestKicStaticIP (38.75s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-068281 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-068281 --static-ip=192.168.200.200: (36.464089822s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-068281 ip
helpers_test.go:175: Cleaning up "static-ip-068281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-068281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-068281: (2.136741778s)
--- PASS: TestKicStaticIP (38.75s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-572038 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-572038 --driver=docker  --container-runtime=crio: (31.806277473s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-574803 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-574803 --driver=docker  --container-runtime=crio: (34.406648088s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-572038
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-574803
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-574803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-574803
E0929 10:58:10.545355    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-574803: (2.026873664s)
helpers_test.go:175: Cleaning up "first-572038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-572038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-572038: (1.921973684s)
--- PASS: TestMinikubeProfile (71.53s)

TestMountStart/serial/StartWithMountFirst (6.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-093595 --memory=3072 --mount-string /tmp/TestMountStartserial763343534/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-093595 --memory=3072 --mount-string /tmp/TestMountStartserial763343534/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.878442615s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.88s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-093595 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-095474 --memory=3072 --mount-string /tmp/TestMountStartserial763343534/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-095474 --memory=3072 --mount-string /tmp/TestMountStartserial763343534/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.505044523s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.51s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-095474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-093595 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-093595 --alsologtostderr -v=5: (1.617073844s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-095474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-095474
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-095474: (1.196593706s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-095474
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-095474: (7.148427152s)
--- PASS: TestMountStart/serial/RestartStopped (8.15s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-095474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (105.44s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-464014 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-464014 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m44.897099416s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.44s)

TestMultiNode/serial/DeployApp2Nodes (6.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-464014 -- rollout status deployment/busybox: (4.603713778s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-b72h7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-wmm6q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-b72h7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-wmm6q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-b72h7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-wmm6q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.69s)
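The exec commands above walk the standard in-cluster DNS chain for each pod: an external name, the short service name completed by the pod's resolv.conf search list, and the fully qualified form. A minimal sketch of the same three lookups; it only resolves the cluster names when run inside a pod on the cluster:

package main

import (
	"fmt"
	"net"
)

func main() {
	for _, name := range []string{
		"kubernetes.io",                        // external, via the node's upstream DNS
		"kubernetes.default",                   // short form, completed by the search list
		"kubernetes.default.svc.cluster.local", // fully qualified service name
	} {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}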

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-b72h7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-b72h7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-wmm6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-464014 -- exec busybox-7b57f96db7-wmm6q -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
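The shell pipeline each pod runs, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, grabs the host gateway IP from line 5 of busybox nslookup output, which the pods then ping (192.168.67.1 here). A minimal sketch of the same extraction; the sample output is illustrative of busybox's format, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative busybox-style nslookup output.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal`

	lines := strings.Split(sample, "\n")
	// awk 'NR==5' is 1-indexed; strings.Fields is looser than
	// cut -d' ' -f3 but picks the same token here.
	fields := strings.Fields(lines[4])
	fmt.Println(fields[2]) // 192.168.67.1
}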

TestMultiNode/serial/AddNode (56.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-464014 -v=5 --alsologtostderr
E0929 11:00:42.963154    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:13.618245    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-464014 -v=5 --alsologtostderr: (55.810655828s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.48s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-464014 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.76s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

TestMultiNode/serial/CopyFile (10.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp testdata/cp-test.txt multinode-464014:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3763922174/001/cp-test_multinode-464014.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014:/home/docker/cp-test.txt multinode-464014-m02:/home/docker/cp-test_multinode-464014_multinode-464014-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m02 "sudo cat /home/docker/cp-test_multinode-464014_multinode-464014-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014:/home/docker/cp-test.txt multinode-464014-m03:/home/docker/cp-test_multinode-464014_multinode-464014-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m03 "sudo cat /home/docker/cp-test_multinode-464014_multinode-464014-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp testdata/cp-test.txt multinode-464014-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3763922174/001/cp-test_multinode-464014-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014-m02:/home/docker/cp-test.txt multinode-464014:/home/docker/cp-test_multinode-464014-m02_multinode-464014.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014 "sudo cat /home/docker/cp-test_multinode-464014-m02_multinode-464014.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014-m02:/home/docker/cp-test.txt multinode-464014-m03:/home/docker/cp-test_multinode-464014-m02_multinode-464014-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m03 "sudo cat /home/docker/cp-test_multinode-464014-m02_multinode-464014-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp testdata/cp-test.txt multinode-464014-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3763922174/001/cp-test_multinode-464014-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014-m03:/home/docker/cp-test.txt multinode-464014:/home/docker/cp-test_multinode-464014-m03_multinode-464014.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014 "sudo cat /home/docker/cp-test_multinode-464014-m03_multinode-464014.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 cp multinode-464014-m03:/home/docker/cp-test.txt multinode-464014-m02:/home/docker/cp-test_multinode-464014-m03_multinode-464014-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 ssh -n multinode-464014-m02 "sudo cat /home/docker/cp-test_multinode-464014-m03_multinode-464014-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)
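Every step in the copy matrix above is the same round trip: `minikube cp` a file onto a node (or between nodes), then `minikube ssh -n <node>` a `sudo cat` and compare the bytes. A minimal sketch of one leg of that matrix, with the profile, node, and paths taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read local file:", err)
		return
	}
	if err := exec.Command("minikube", "-p", "multinode-464014", "cp",
		"testdata/cp-test.txt", "multinode-464014:/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	got, err := exec.Command("minikube", "-p", "multinode-464014", "ssh",
		"-n", "multinode-464014", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	fmt.Println("round trip intact:", bytes.Equal(want, got))
}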

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-464014 node stop m03: (1.21426756s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-464014 status: exit status 7 (568.156948ms)
-- stdout --
	multinode-464014
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-464014-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-464014-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr: exit status 7 (521.184242ms)
-- stdout --
	multinode-464014
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-464014-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-464014-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0929 11:01:42.457763  118327 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:01:42.457973  118327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:01:42.457999  118327 out.go:374] Setting ErrFile to fd 2...
	I0929 11:01:42.458025  118327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:01:42.458344  118327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 11:01:42.458579  118327 out.go:368] Setting JSON to false
	I0929 11:01:42.458646  118327 mustload.go:65] Loading cluster: multinode-464014
	I0929 11:01:42.458718  118327 notify.go:220] Checking for updates...
	I0929 11:01:42.459860  118327 config.go:182] Loaded profile config "multinode-464014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:01:42.459933  118327 status.go:174] checking status of multinode-464014 ...
	I0929 11:01:42.460609  118327 cli_runner.go:164] Run: docker container inspect multinode-464014 --format={{.State.Status}}
	I0929 11:01:42.481043  118327 status.go:371] multinode-464014 host status = "Running" (err=<nil>)
	I0929 11:01:42.481071  118327 host.go:66] Checking if "multinode-464014" exists ...
	I0929 11:01:42.481416  118327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-464014
	I0929 11:01:42.508758  118327 host.go:66] Checking if "multinode-464014" exists ...
	I0929 11:01:42.509051  118327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:01:42.509106  118327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-464014
	I0929 11:01:42.535322  118327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/multinode-464014/id_rsa Username:docker}
	I0929 11:01:42.632324  118327 ssh_runner.go:195] Run: systemctl --version
	I0929 11:01:42.636587  118327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:01:42.647733  118327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:01:42.703261  118327 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-29 11:01:42.693677337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:01:42.703817  118327 kubeconfig.go:125] found "multinode-464014" server: "https://192.168.67.2:8443"
	I0929 11:01:42.703866  118327 api_server.go:166] Checking apiserver status ...
	I0929 11:01:42.703923  118327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:01:42.714942  118327 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	I0929 11:01:42.724637  118327 api_server.go:182] apiserver freezer: "13:freezer:/docker/89023525686c80e59d85997612a45c5bf141e5f379a2c9a997f0e1bb9c3d9cb7/crio/crio-7b5184005212ce7a80173101ac719f3b17c9fe700dd48f01e840a0706b0ab2e3"
	I0929 11:01:42.724707  118327 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/89023525686c80e59d85997612a45c5bf141e5f379a2c9a997f0e1bb9c3d9cb7/crio/crio-7b5184005212ce7a80173101ac719f3b17c9fe700dd48f01e840a0706b0ab2e3/freezer.state
	I0929 11:01:42.734254  118327 api_server.go:204] freezer state: "THAWED"
	I0929 11:01:42.734286  118327 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 11:01:42.742817  118327 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 11:01:42.742847  118327 status.go:463] multinode-464014 apiserver status = Running (err=<nil>)
	I0929 11:01:42.742858  118327 status.go:176] multinode-464014 status: &{Name:multinode-464014 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:01:42.742873  118327 status.go:174] checking status of multinode-464014-m02 ...
	I0929 11:01:42.743281  118327 cli_runner.go:164] Run: docker container inspect multinode-464014-m02 --format={{.State.Status}}
	I0929 11:01:42.760435  118327 status.go:371] multinode-464014-m02 host status = "Running" (err=<nil>)
	I0929 11:01:42.760459  118327 host.go:66] Checking if "multinode-464014-m02" exists ...
	I0929 11:01:42.760756  118327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-464014-m02
	I0929 11:01:42.778216  118327 host.go:66] Checking if "multinode-464014-m02" exists ...
	I0929 11:01:42.778529  118327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:01:42.778574  118327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-464014-m02
	I0929 11:01:42.796346  118327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21657-2306/.minikube/machines/multinode-464014-m02/id_rsa Username:docker}
	I0929 11:01:42.892314  118327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:01:42.904058  118327 status.go:176] multinode-464014-m02 status: &{Name:multinode-464014-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:01:42.904105  118327 status.go:174] checking status of multinode-464014-m03 ...
	I0929 11:01:42.904400  118327 cli_runner.go:164] Run: docker container inspect multinode-464014-m03 --format={{.State.Status}}
	I0929 11:01:42.926131  118327 status.go:371] multinode-464014-m03 host status = "Stopped" (err=<nil>)
	I0929 11:01:42.926154  118327 status.go:384] host is not running, skipping remaining checks
	I0929 11:01:42.926160  118327 status.go:176] multinode-464014-m03 status: &{Name:multinode-464014-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
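The stderr log above shows the full control-plane probe `status` performs for a running node: inspect the container state, locate the kube-apiserver process, confirm its freezer cgroup is THAWED, and finally hit the apiserver's /healthz endpoint (a 200 with body `ok` here). A minimal sketch of that last step; skipping TLS verification is an assumption made so the sketch runs without the profile's CA bundle:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Assumption for the sketch: trust the apiserver blindly
		// instead of loading minikube's CA certificate.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s %s\n", resp.Status, body) // the log shows 200 "ok" while running
}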

TestMultiNode/serial/StartAfterStop (8.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-464014 node start m03 -v=5 --alsologtostderr: (7.249640311s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.05s)

TestMultiNode/serial/RestartKeepsNodes (81.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-464014
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-464014
E0929 11:02:06.027059    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-464014: (24.759604426s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-464014 --wait=true -v=5 --alsologtostderr
E0929 11:03:10.545636    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-464014 --wait=true -v=5 --alsologtostderr: (56.406628019s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-464014
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.29s)
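Editor's note: the invariant this test asserts is that a full stop/start cycle preserves the node list. A sketch of the same check done by hand (the before/after diff is illustrative, not part of the test):

	minikube node list -p multinode-464014 > before.txt
	minikube stop -p multinode-464014
	minikube start -p multinode-464014 --wait=true -v=5 --alsologtostderr
	minikube node list -p multinode-464014 > after.txt
	diff before.txt after.txt    # no output means the restart kept every node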

TestMultiNode/serial/DeleteNode (5.6s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-464014 node delete m03: (4.914832995s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.60s)
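Editor's note: the go-template passed to kubectl at multinode_test.go:444 walks .status.conditions on every node and prints each node's Ready condition, which is how the test confirms the two remaining nodes are healthy after the delete. The same template, unwrapped from the harness quoting, can be run directly:

	# one line of True/False per node, taken from the Ready condition
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'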

TestMultiNode/serial/StopMultiNode (23.76s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-464014 stop: (23.577006736s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-464014 status: exit status 7 (90.087674ms)

-- stdout --
	multinode-464014
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-464014-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr: exit status 7 (92.614764ms)

-- stdout --
	multinode-464014
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-464014-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 11:03:41.577787  126203 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:03:41.577994  126203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:03:41.578020  126203 out.go:374] Setting ErrFile to fd 2...
	I0929 11:03:41.578040  126203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:03:41.578317  126203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 11:03:41.578544  126203 out.go:368] Setting JSON to false
	I0929 11:03:41.578603  126203 mustload.go:65] Loading cluster: multinode-464014
	I0929 11:03:41.578685  126203 notify.go:220] Checking for updates...
	I0929 11:03:41.579662  126203 config.go:182] Loaded profile config "multinode-464014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:03:41.579717  126203 status.go:174] checking status of multinode-464014 ...
	I0929 11:03:41.580267  126203 cli_runner.go:164] Run: docker container inspect multinode-464014 --format={{.State.Status}}
	I0929 11:03:41.598726  126203 status.go:371] multinode-464014 host status = "Stopped" (err=<nil>)
	I0929 11:03:41.598746  126203 status.go:384] host is not running, skipping remaining checks
	I0929 11:03:41.598752  126203 status.go:176] multinode-464014 status: &{Name:multinode-464014 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:03:41.598783  126203 status.go:174] checking status of multinode-464014-m02 ...
	I0929 11:03:41.599069  126203 cli_runner.go:164] Run: docker container inspect multinode-464014-m02 --format={{.State.Status}}
	I0929 11:03:41.627573  126203 status.go:371] multinode-464014-m02 host status = "Stopped" (err=<nil>)
	I0929 11:03:41.627597  126203 status.go:384] host is not running, skipping remaining checks
	I0929 11:03:41.627603  126203 status.go:176] multinode-464014-m02 status: &{Name:multinode-464014-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.76s)
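Editor's note: both status calls above exit 7 rather than 0. In these logs exit status 7 is what minikube status returns when hosts are stopped, so the harness treats the non-zero exit as the expected outcome, not a failure. A sketch of the same check in a script:

	minikube -p multinode-464014 status
	rc=$?
	# in this run: 0 means everything is up; 7 means host/kubelet/apiserver report Stopped
	[ "$rc" -eq 7 ] && echo "cluster is stopped, as expected after 'minikube stop'"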

TestMultiNode/serial/RestartMultiNode (56.43s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-464014 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-464014 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (55.738093771s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-464014 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.43s)

TestMultiNode/serial/ValidateNameConflict (36.32s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-464014
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-464014-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-464014-m02 --driver=docker  --container-runtime=crio: exit status 14 (93.473423ms)

-- stdout --
	* [multinode-464014-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-464014-m02' is duplicated with machine name 'multinode-464014-m02' in profile 'multinode-464014'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-464014-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-464014-m03 --driver=docker  --container-runtime=crio: (33.921621024s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-464014
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-464014: exit status 80 (327.192294ms)

-- stdout --
	* Adding node m03 to cluster multinode-464014 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-464014-m03 already exists in multinode-464014-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-464014-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-464014-m03: (1.920871695s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.32s)

TestPreload (132.61s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-252399 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0929 11:05:42.963782    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-252399 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.070547059s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-252399 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-252399 image pull gcr.io/k8s-minikube/busybox: (3.475340729s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-252399
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-252399: (5.800314093s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-252399 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-252399 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m0.723490965s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-252399 image list
helpers_test.go:175: Cleaning up "test-preload-252399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-252399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-252399: (2.29939421s)
--- PASS: TestPreload (132.61s)
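Editor's note: the flow above is the preload regression check: a cluster created with --preload=false pulls an extra image, and after a stop/start that image must still be present. A condensed sketch of the same steps, all taken from the commands in this log:

	minikube start -p test-preload-252399 --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	minikube -p test-preload-252399 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-252399
	minikube start -p test-preload-252399 --memory=3072 --wait=true --driver=docker --container-runtime=crio
	minikube -p test-preload-252399 image list    # busybox must still be listed after the restart
	minikube delete -p test-preload-252399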

TestScheduledStopUnix (109.33s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-492776 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-492776 --memory=3072 --driver=docker  --container-runtime=crio: (33.28740792s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-492776 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-492776 -n scheduled-stop-492776
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-492776 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 11:08:04.867191    4108 retry.go:31] will retry after 87.155µs: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.867351    4108 retry.go:31] will retry after 132.956µs: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.867630    4108 retry.go:31] will retry after 276.327µs: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.868438    4108 retry.go:31] will retry after 454.284µs: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.869515    4108 retry.go:31] will retry after 389.221µs: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.870598    4108 retry.go:31] will retry after 803.077µs: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.871727    4108 retry.go:31] will retry after 1.66027ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.873931    4108 retry.go:31] will retry after 1.789745ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.876182    4108 retry.go:31] will retry after 3.783634ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.880399    4108 retry.go:31] will retry after 4.79689ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.885605    4108 retry.go:31] will retry after 7.731468ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.893852    4108 retry.go:31] will retry after 8.311884ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.903094    4108 retry.go:31] will retry after 9.569509ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.913421    4108 retry.go:31] will retry after 10.287271ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.924563    4108 retry.go:31] will retry after 15.110763ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
I0929 11:08:04.940789    4108 retry.go:31] will retry after 45.969703ms: open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/scheduled-stop-492776/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-492776 --cancel-scheduled
E0929 11:08:10.545171    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-492776 -n scheduled-stop-492776
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-492776
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-492776 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-492776
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-492776: exit status 7 (66.985063ms)

-- stdout --
	scheduled-stop-492776
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-492776 -n scheduled-stop-492776
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-492776 -n scheduled-stop-492776: exit status 7 (66.09097ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-492776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-492776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-492776: (4.436474464s)
--- PASS: TestScheduledStopUnix (109.33s)
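Editor's note: the scheduled-stop surface exercised above, condensed into one sketch (every flag appears in the log; the timings are illustrative):

	minikube stop -p scheduled-stop-492776 --schedule 5m                   # arm a stop 5 minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-492776     # inspect the pending timer
	minikube stop -p scheduled-stop-492776 --cancel-scheduled             # disarm it
	minikube stop -p scheduled-stop-492776 --schedule 15s                 # re-arm; status exits 7 once the host is Stopped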

TestInsufficientStorage (10.99s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-475938 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-475938 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.487830457s)

-- stdout --
	{"specversion":"1.0","id":"9fd05bb3-d90b-49fa-b144-533fff9858dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-475938] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"54965f14-5b2b-4809-b0d5-3eac421d6e03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21657"}}
	{"specversion":"1.0","id":"85916fc8-6b06-45a1-b94c-9135acc6450a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c8cfa414-8292-4415-b60d-360dfb219f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig"}}
	{"specversion":"1.0","id":"f30e20c4-f5b7-4521-982e-7c76bce29c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube"}}
	{"specversion":"1.0","id":"0c461f70-7750-440a-a857-6f979e0e5fcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6d643596-7f0c-4564-8451-c9859eb8d95a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"df5dcf47-9177-43a2-b78d-c276e79bae18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5612f3ef-3870-4ae4-ab42-ec91b6ebe451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9c409565-25e9-44dc-aa5b-5248dcea99ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4dc7dc41-0433-41f5-93ca-27d0c61be626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"807910d9-0e98-4884-b011-aaad0904b77c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-475938\" primary control-plane node in \"insufficient-storage-475938\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a41219d-8e28-4ef8-a53f-f3dcf6f7fa01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"eceab7bb-acef-43ab-91bd-8b7fc55d563a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"366eb4d9-57d2-4f59-8779-9c6a05743bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-475938 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-475938 --output=json --layout=cluster: exit status 7 (304.311959ms)

-- stdout --
	{"Name":"insufficient-storage-475938","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-475938","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0929 11:09:29.158095  143621 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-475938" does not appear in /home/jenkins/minikube-integration/21657-2306/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-475938 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-475938 --output=json --layout=cluster: exit status 7 (292.92976ms)

-- stdout --
	{"Name":"insufficient-storage-475938","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-475938","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0929 11:09:29.452539  143681 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-475938" does not appear in /home/jenkins/minikube-integration/21657-2306/kubeconfig
	E0929 11:09:29.463114  143681 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/insufficient-storage-475938/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-475938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-475938
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-475938: (1.899704264s)
--- PASS: TestInsufficientStorage (10.99s)
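Editor's note: because --output=json emits one CloudEvents-framed JSON object per line, the storage error above can be machine-filtered rather than scraped from text. A sketch, assuming jq is available on the host:

	minikube start -p insufficient-storage-475938 --memory=3072 --output=json --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# on a full /var this prints the RSRC_DOCKER_STORAGE message and minikube exits 26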

TestRunningBinaryUpgrade (67.21s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3247622046 start -p running-upgrade-682363 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3247622046 start -p running-upgrade-682363 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.242174636s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-682363 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-682363 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.007979617s)
helpers_test.go:175: Cleaning up "running-upgrade-682363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-682363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-682363: (2.289375016s)
--- PASS: TestRunningBinaryUpgrade (67.21s)

TestKubernetesUpgrade (126.92s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.129259925s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-911454
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-911454: (1.351665s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-911454 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-911454 status --format={{.Host}}: exit status 7 (118.584887ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.586231104s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-911454 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (158.950838ms)

-- stdout --
	* [kubernetes-upgrade-911454] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-911454
	    minikube start -p kubernetes-upgrade-911454 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9114542 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-911454 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.995219114s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-911454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-911454
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-911454: (2.389162356s)
--- PASS: TestKubernetesUpgrade (126.92s)
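Editor's note: the version-change rules the test demonstrates: an upgrade goes through a stop/start with a newer --kubernetes-version, while a downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106) unless the cluster is recreated. Condensed from the commands in this log:

	minikube start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-911454
	minikube start -p kubernetes-upgrade-911454 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio
	# this one fails by design with exit status 106:
	minikube start -p kubernetes-upgrade-911454 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio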

TestMissingContainerUpgrade (123.29s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3624971266 start -p missing-upgrade-280018 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3624971266 start -p missing-upgrade-280018 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.321970553s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-280018
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-280018: (1.161420536s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-280018
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-280018 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-280018 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.110855055s)
helpers_test.go:175: Cleaning up "missing-upgrade-280018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-280018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-280018: (2.898372757s)
--- PASS: TestMissingContainerUpgrade (123.29s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476754 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-476754 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (89.482388ms)

-- stdout --
	* [NoKubernetes-476754] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
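Editor's note: the usage error is self-describing: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error's own suggestion is to clear any globally pinned version first. As a sketch:

	minikube config unset kubernetes-version    # drop a global version pin, per the error's suggestion
	minikube start -p NoKubernetes-476754 --no-kubernetes --driver=docker --container-runtime=crio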

TestNoKubernetes/serial/StartWithK8s (46.37s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476754 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476754 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.826364792s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-476754 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.37s)

TestNoKubernetes/serial/StartWithStopK8s (19.98s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (17.068276927s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-476754 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-476754 status -o json: exit status 2 (560.210676ms)

-- stdout --
	{"Name":"NoKubernetes-476754","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-476754
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-476754: (2.355652593s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.98s)
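Editor's note: the JSON status line is the interesting artifact here: Host "Running" with Kubelet and APIServer "Stopped", which is also why the command exits 2 instead of 0. A sketch that extracts just those fields (jq assumed; stdout is still produced despite the non-zero exit):

	minikube -p NoKubernetes-476754 status -o json | jq -r '.Host + "/" + .Kubelet'
	# expected in this state: Running/Stopped (the node is up, but Kubernetes is not)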

TestNoKubernetes/serial/Start (6.71s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0929 11:10:42.963202    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.705558014s)
--- PASS: TestNoKubernetes/serial/Start (6.71s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-476754 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-476754 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.739745ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (0.91s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.91s)

TestNoKubernetes/serial/Stop (1.2s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-476754
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-476754: (1.19881742s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (7.3s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476754 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476754 --driver=docker  --container-runtime=crio: (7.303716776s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.30s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-476754 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-476754 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.378728ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (0.72s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (65.06s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1196571607 start -p stopped-upgrade-877909 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1196571607 start -p stopped-upgrade-877909 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.342592746s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1196571607 -p stopped-upgrade-877909 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1196571607 -p stopped-upgrade-877909 stop: (1.340084564s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-877909 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-877909 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.372860675s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.06s)
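Editor's note: the upgrade path under test is that a cluster created and stopped by the old release binary can be adopted by the binary under test. Note the old binary still uses the legacy --vm-driver spelling while the new one takes --driver; the steps below are copied from this run's commands:

	/tmp/minikube-v1.32.0.1196571607 start -p stopped-upgrade-877909 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0.1196571607 -p stopped-upgrade-877909 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-877909 --memory=3072 --driver=docker --container-runtime=crio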

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-877909
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-877909: (1.201259446s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestPause/serial/Start (90.56s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-472571 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0929 11:13:10.545604    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-472571 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.560865135s)
--- PASS: TestPause/serial/Start (90.56s)

TestNetworkPlugins/group/false (3.71s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-163439 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-163439 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (196.696668ms)

-- stdout --
	* [false-163439] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0929 11:14:26.736190  176134 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:14:26.736632  176134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:14:26.736670  176134 out.go:374] Setting ErrFile to fd 2...
	I0929 11:14:26.736688  176134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:14:26.737007  176134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-2306/.minikube/bin
	I0929 11:14:26.737482  176134 out.go:368] Setting JSON to false
	I0929 11:14:26.738428  176134 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3416,"bootTime":1759141051,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0929 11:14:26.738563  176134 start.go:140] virtualization:  
	I0929 11:14:26.742270  176134 out.go:179] * [false-163439] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:14:26.746114  176134 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:14:26.746312  176134 notify.go:220] Checking for updates...
	I0929 11:14:26.749819  176134 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:14:26.753141  176134 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-2306/kubeconfig
	I0929 11:14:26.756326  176134 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-2306/.minikube
	I0929 11:14:26.759292  176134 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:14:26.762272  176134 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:14:26.765713  176134 config.go:182] Loaded profile config "pause-472571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:14:26.765822  176134 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:14:26.791226  176134 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:14:26.791392  176134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:14:26.862341  176134 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:14:26.847536764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:14:26.862455  176134 docker.go:318] overlay module found
	I0929 11:14:26.865453  176134 out.go:179] * Using the docker driver based on user configuration
	I0929 11:14:26.868206  176134 start.go:304] selected driver: docker
	I0929 11:14:26.868225  176134 start.go:924] validating driver "docker" against <nil>
	I0929 11:14:26.868238  176134 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:14:26.871685  176134 out.go:203] 
	W0929 11:14:26.874511  176134 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 11:14:26.877375  176134 out.go:203] 

** /stderr **
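Editor's note: the validator rejects --cni=false up front because the crio runtime needs a CNI to wire pod networking, so the false-163439 profile is never created, which is exactly what the debugLogs dump below keeps confirming. A start the validator would accept might look like the following; this is a hypothetical command that was not run in this job, with bridge being one of minikube's built-in --cni options:

	# not run here: same profile, but with an explicit CNI so the crio check passes
	minikube start -p false-163439 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio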
net_test.go:88: 
----------------------- debugLogs start: false-163439 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-163439

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-163439

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-163439

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-163439

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-163439

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-163439" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-472571
contexts:
- context:
    cluster: pause-472571
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-472571
  name: pause-472571
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-472571
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/pause-472571/client.crt
    client-key: /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/pause-472571/client.key
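This dump explains the errors above: the only context defined is pause-472571 and current-context is empty, so any kubectl call naming false-163439 has nothing to resolve. If one actually wanted to drive this kubeconfig, selecting the one existing context would look like this (sketch):

kubectl config use-context pause-472571
kubectl config current-context   # now prints: pause-472571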

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-163439

>>> host: docker daemon status:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: docker daemon config:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: /etc/docker/daemon.json:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: docker system info:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: cri-docker daemon status:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: cri-docker daemon config:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: cri-dockerd version:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: containerd daemon status:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: containerd daemon config:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: /etc/containerd/config.toml:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: containerd config dump:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: crio daemon status:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: crio daemon config:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: /etc/crio:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

>>> host: crio config:
* Profile "false-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163439"

----------------------- debugLogs end: false-163439 [took: 3.343628563s] --------------------------------
helpers_test.go:175: Cleaning up "false-163439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-163439
--- PASS: TestNetworkPlugins/group/false (3.71s)
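Every kubectl-based probe in the debugLogs dump failed the same way because the false-163439 profile (deliberately) never created a cluster, so no kubeconfig context was ever written for it. A quick way to list which contexts a kubeconfig really holds (sketch, assuming the default kubeconfig resolution):

kubectl config get-contexts -o name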

TestPause/serial/SecondStartNoReconfiguration (28.72s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-472571 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-472571 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.659238989s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.72s)

TestPause/serial/Pause (1.16s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-472571 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-472571 --alsologtostderr -v=5: (1.157199505s)
--- PASS: TestPause/serial/Pause (1.16s)

TestPause/serial/VerifyStatus (0.48s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-472571 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-472571 --output=json --layout=cluster: exit status 2 (483.613811ms)
-- stdout --
	{"Name":"pause-472571","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-472571","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
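The --layout=cluster status JSON encodes state with HTTP-flavored codes: 200 OK, 405 Stopped, 418 Paused. A sketch for pulling the per-component states out of that output, assuming jq is available on the host:

out/minikube-linux-arm64 status -p pause-472571 --output=json --layout=cluster \
  | jq '{cluster: .StatusName, components: (.Nodes[].Components | map_values(.StatusName))}'
# -> {"cluster":"Paused","components":{"apiserver":"Paused","kubelet":"Stopped"}}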

TestPause/serial/Unpause (1.03s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-472571 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-472571 --alsologtostderr -v=5: (1.034346867s)
--- PASS: TestPause/serial/Unpause (1.03s)

TestPause/serial/PauseAgain (1.42s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-472571 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-472571 --alsologtostderr -v=5: (1.424640501s)
--- PASS: TestPause/serial/PauseAgain (1.42s)

TestPause/serial/DeletePaused (3.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-472571 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-472571 --alsologtostderr -v=5: (3.230136027s)
--- PASS: TestPause/serial/DeletePaused (3.23s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-472571
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-472571: exit status 1 (20.21616ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-472571: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
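Here the non-zero exit is the point of the assertion: after delete, the profile's Docker volume must no longer exist, and inspect failing proves it. The same check as a standalone one-liner (sketch):

# prints "removed" only when the volume is really gone
docker volume inspect pause-472571 >/dev/null 2>&1 || echo "pause-472571 removed"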

TestStartStop/group/old-k8s-version/serial/FirstStart (62.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-543900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-543900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.173585091s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.17s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-543900 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fc38e7e1-6b46-4c94-bb4d-7920619ac3e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fc38e7e1-6b46-4c94-bb4d-7920619ac3e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004031015s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-543900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)
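The contents of testdata/busybox.yaml are not shown in the log; the wait loop only depends on a pod carrying the integration-test=busybox label and reaching Running. A minimal stand-in with those properties could look like this (sketch; only the label and image name are taken from this report, everything else is assumed):

kubectl --context old-k8s-version-543900 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF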

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-543900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-543900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.061083619s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-543900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-543900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-543900 --alsologtostderr -v=3: (11.924253425s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-543900 -n old-k8s-version-543900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-543900 -n old-k8s-version-543900: exit status 7 (70.545644ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-543900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (57.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-543900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0929 11:17:53.619995    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:18:10.546065    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-543900 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (57.502664944s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-543900 -n old-k8s-version-543900
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.92s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xk57r" [699fef1a-07cf-4d4f-8012-50269a7e4999] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003822095s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xk57r" [699fef1a-07cf-4d4f-8012-50269a7e4999] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003355879s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-543900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-543900 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
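VerifyKubernetesImages compares the image list against the set minikube itself ships for this Kubernetes version and reports the remainder as "non-minikube" (here the kindnet images and the busybox test image). To eyeball the raw list, something like this works (sketch; assumes the JSON output exposes a repoTags array per image and that jq is installed):

out/minikube-linux-arm64 -p old-k8s-version-543900 image list --format=json | jq -r '.[].repoTags[]'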

TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-543900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-543900 -n old-k8s-version-543900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-543900 -n old-k8s-version-543900: exit status 2 (311.596544ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-543900 -n old-k8s-version-543900
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-543900 -n old-k8s-version-543900: exit status 2 (338.925361ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-543900 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-543900 -n old-k8s-version-543900
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-543900 -n old-k8s-version-543900
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

TestStartStop/group/no-preload/serial/FirstStart (75.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-373816 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 11:18:46.029201    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-373816 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m15.720961774s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.72s)

TestStartStop/group/embed-certs/serial/FirstStart (85.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-408175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-408175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m25.339199647s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.34s)

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-373816 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7372ac8e-14e1-4e0a-90bf-410ab1ea3919] Pending
helpers_test.go:352: "busybox" [7372ac8e-14e1-4e0a-90bf-410ab1ea3919] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7372ac8e-14e1-4e0a-90bf-410ab1ea3919] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004135941s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-373816 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-373816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-373816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041182637s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-373816 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-373816 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-373816 --alsologtostderr -v=3: (11.999975128s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-373816 -n no-preload-373816
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-373816 -n no-preload-373816: exit status 7 (71.593555ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-373816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (56.04s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-373816 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 11:20:42.963934    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-373816 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (55.635074614s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-373816 -n no-preload-373816
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.04s)

TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-408175 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b98b8d09-a4c8-46f7-9eb8-b20751f5b830] Pending
helpers_test.go:352: "busybox" [b98b8d09-a4c8-46f7-9eb8-b20751f5b830] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b98b8d09-a4c8-46f7-9eb8-b20751f5b830] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003972094s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-408175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.51s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-408175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-408175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.399033595s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-408175 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.51s)

TestStartStop/group/embed-certs/serial/Stop (11.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-408175 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-408175 --alsologtostderr -v=3: (11.925814413s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-408175 -n embed-certs-408175
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-408175 -n embed-certs-408175: exit status 7 (73.605203ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-408175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (54.74s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-408175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-408175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.269575472s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-408175 -n embed-certs-408175
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.74s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4d2rg" [13191ddc-de5b-4a97-a975-c6a971dab14a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00288794s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4d2rg" [13191ddc-de5b-4a97-a975-c6a971dab14a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003468444s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-373816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-373816 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (4.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-373816 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-373816 -n no-preload-373816
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-373816 -n no-preload-373816: exit status 2 (417.310108ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-373816 -n no-preload-373816
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-373816 -n no-preload-373816: exit status 2 (464.683929ms)
-- stdout --
	Stopped

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-373816 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-373816 --alsologtostderr -v=1: (1.014879473s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-373816 -n no-preload-373816
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-373816 -n no-preload-373816
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-563705 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 11:22:03.949536    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:03.955803    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:03.967066    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:03.988333    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:04.029680    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:04.111088    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:04.272465    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:04.594121    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:05.236331    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:06.518491    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-563705 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m25.000815315s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.00s)
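The burst of cert_rotation errors during this start is background noise: a client-go certificate reload watcher in the test binary still points at the client.crt of the old-k8s-version-543900 profile, which an earlier test deleted, and the retry timestamps (…:03.949 through …:06.518) spread out in the doubling pattern of an exponential backoff. If a run is captured to a file, the noise is easy to quantify (sketch; run.log is a hypothetical capture, not an artifact of this report):

grep -c 'old-k8s-version-543900/client.crt: no such file or directory' run.log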

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-44mlp" [9005acb2-875a-4fad-af27-b9c25394420b] Running
E0929 11:22:09.080039    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004219137s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
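The six-second wait above is the harness polling for pods that match a label selector until they report Running within the 9m budget. A minimal sketch of that kind of loop in Go, shelling out to kubectl rather than using the repo's test helpers; the context, namespace, and selector come from this log, while the jsonpath query and the 2s poll interval are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl until every pod matching the selector
// reports phase Running, or the deadline passes.
func waitForRunning(ctx, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	fmt.Println(waitForRunning("embed-certs-408175", "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard", 9*time.Minute))
}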

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-44mlp" [9005acb2-875a-4fad-af27-b9c25394420b] Running
E0929 11:22:14.201392    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004273724s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-408175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-408175 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)
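VerifyKubernetesImages lists the images present in the node and reports any that are not part of a stock minikube deployment; here kindnetd and the busybox test image are expected leftovers from earlier subtests. A sketch of that kind of filter over image references; the prefix allow-list below is illustrative, not minikube's actual list, and the plain string slice stands in for the real JSON output:

package main

import (
	"fmt"
	"strings"
)

// Illustrative prefixes for images a stock minikube cluster is expected to carry.
var minikubePrefixes = []string{
	"registry.k8s.io/",
	"gcr.io/k8s-minikube/storage-provisioner",
}

// nonMinikubeImages returns every image reference that matches none of the prefixes.
func nonMinikubeImages(images []string) []string {
	var extra []string
	for _, img := range images {
		known := false
		for _, p := range minikubePrefixes {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			extra = append(extra, img)
		}
	}
	return extra
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.34.0",
		"kindest/kindnetd:v20250512-df8de77b",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, img := range nonMinikubeImages(images) {
		fmt.Println("Found non-minikube image:", img)
	}
}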

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-408175 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-408175 --alsologtostderr -v=1: (1.139632555s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-408175 -n embed-certs-408175
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-408175 -n embed-certs-408175: exit status 2 (371.393683ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-408175 -n embed-certs-408175
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-408175 -n embed-certs-408175: exit status 2 (330.106494ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-408175 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-408175 -n embed-certs-408175
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-408175 -n embed-certs-408175
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.44s)
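The "status error: exit status 2 (may be ok)" lines are expected here: minikube status reports component state through its exit code as well as its rendered output, and immediately after pause the apiserver shows Paused and the kubelet Stopped, so a non-zero exit is the pass condition rather than a failure. A minimal Go sketch of reading both the rendered field and the exit code (binary path and profile name taken from this log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "embed-certs-408175")
	out, err := cmd.Output() // captures stdout even on non-zero exit
	fmt.Printf("stdout: %s\n", out) // e.g. "Paused"

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 2 after a pause means a component is not Running,
		// not that the command itself failed ("may be ok" in the log).
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}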

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-491558 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 11:22:44.924065    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-491558 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (33.149943967s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-491558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-491558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.031545239s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-491558 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-491558 --alsologtostderr -v=3: (1.553169455s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491558 -n newest-cni-491558
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491558 -n newest-cni-491558: exit status 7 (72.279227ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-491558 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
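The --format={{.Host}} arguments used throughout these status calls are Go text/template expressions evaluated against minikube's status structure; exit status 7 in this block corresponds to the host being stopped, and the template merely selects which field to print. A reduced stand-in showing the same mechanism (the real struct has more fields than the three this log queries):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status type, trimmed to the
// fields this log queries with --format.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	// Same template syntax as: status --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	tmpl.Execute(os.Stdout, st) // prints: Stopped
}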

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.53s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-491558 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-491558 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (15.047632471s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491558 -n newest-cni-491558
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-563705 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f8612180-3962-456f-99b7-e58ee5096a04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f8612180-3962-456f-99b7-e58ee5096a04] Running
E0929 11:23:10.545302    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003999383s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-563705 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)
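The DeployApp check finishes by running "ulimit -n" inside the busybox pod, i.e. reading the container's open-file-descriptor limit. The same number is available programmatically via getrlimit; a minimal Linux-only Go sketch:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// RLIMIT_NOFILE is what `ulimit -n` reports: the maximum number
	// of open file descriptors allowed for the process.
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
	fmt.Println("soft:", lim.Cur, "hard:", lim.Max)
}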

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-563705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-563705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.517770648s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-563705 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-563705 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-563705 --alsologtostderr -v=3: (12.324515447s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-491558 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-491558 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491558 -n newest-cni-491558
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491558 -n newest-cni-491558: exit status 2 (307.923268ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-491558 -n newest-cni-491558
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-491558 -n newest-cni-491558: exit status 2 (308.179515ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-491558 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491558 -n newest-cni-491558
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-491558 -n newest-cni-491558
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.08s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0929 11:23:25.885716    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.078559344s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705: exit status 7 (229.482472ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-563705 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-563705 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-563705 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m3.445743354s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9jwm" [55ae4503-943a-48be-9199-1ebc7a9c6ff7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004436762s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9jwm" [55ae4503-943a-48be-9199-1ebc7a9c6ff7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002956675s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-563705 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-563705 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-563705 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-563705 --alsologtostderr -v=1: (1.08494227s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705: exit status 2 (388.693907ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705: exit status 2 (355.007508ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-563705 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-563705 -n default-k8s-diff-port-563705
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.60s)
E0929 11:33:59.947666    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:14.815624    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:33.622327    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.153577    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.160038    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.171520    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.193088    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.234616    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.316140    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.477775    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:42.799564    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:43.441878    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:44.723308    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:47.285531    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:48.559208    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:52.407773    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:59.067938    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:02.649151    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:16.263517    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:23.130540    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:26.030739    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:36.737735    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:42.963574    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:04.092820    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.559151    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.565544    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.576932    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.598402    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.639811    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.721307    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:10.882923    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:11.204362    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:11.846349    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:13.128186    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:15.690018    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:16.085885    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:20.812061    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:31.054135    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:43.789173    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:51.535802    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:37:03.949603    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:37:26.014924    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:37:32.497830    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:37:52.878861    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.120107    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.126507    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.137876    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.159323    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.200715    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.282213    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.443696    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.577168    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/default-k8s-diff-port-563705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:04.765665    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:05.407302    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:06.688911    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:09.251077    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:10.545936    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:14.373096    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:20.579811    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:24.614535    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:27.011185    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:45.103298    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:54.420028    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:26.065784    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/bridge-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:42.155346    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:48.559288    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:59.067301    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:40:09.856549    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/enable-default-cni-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:40:42.963758    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
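These cert_rotation errors all come from the long-running test process (pid 4108), not from any single test: profiles such as old-k8s-version-543900, enable-default-cni-163439, flannel-163439, and bridge-163439 have been deleted, but loaded client configs still point at their client certificate files, and client-go keeps retrying the load with backoff (note the roughly doubling intervals between retries for each profile). They accompany passing tests and do not affect results. The underlying failure is an ordinary missing-file error; a minimal Go sketch that reproduces it, with a hypothetical path standing in for a deleted profile:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Loading a client cert whose files were removed fails the same way
	// the rotation loop does: with a plain os-level open error.
	_, err := tls.LoadX509KeyPair(
		"/home/jenkins/.minikube/profiles/gone/client.crt", // hypothetical path
		"/home/jenkins/.minikube/profiles/gone/client.key",
	)
	fmt.Println(err) // open .../client.crt: no such file or directory
}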

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-163439 "pgrep -a kubelet"
E0929 11:24:47.806994    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.47s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-163439 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kqcdk" [9c4dc2e5-2ba0-40d2-8b47-24b2e3b6f464] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kqcdk" [9c4dc2e5-2ba0-40d2-8b47-24b2e3b6f464] Running
E0929 11:24:59.067838    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.076370    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.088691    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.110999    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.152923    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.235057    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.396724    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:59.719395    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:25:00.362384    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.012764956s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.201315068s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-163439 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)
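The DNS check resolves kubernetes.default from inside the netcat pod, which exercises the pod's resolv.conf search path: the short name expands to kubernetes.default.svc.cluster.local and resolves to the API server's ClusterIP. The Go equivalent also goes through the resolver configured for the process, so it only succeeds when run inside a cluster pod:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod, "kubernetes.default" resolves via the cluster DNS
	// search domains; outside a cluster this prints a lookup error.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}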

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
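The Localhost and HairPin checks above are bare TCP connect probes: "nc -w 5 -i 5 -z" completes a handshake and sends no data, first against localhost:8080 inside the pod, then against the pod's own service name ("netcat"), the hairpin case where a pod reaches itself through its service VIP. A minimal Go equivalent of nc -z with the same 5-second budget (the service name only resolves in-cluster):

package main

import (
	"fmt"
	"net"
	"time"
)

// portOpen connects like `nc -w 5 -z host port`: success means the
// TCP handshake completed; no payload is exchanged.
func portOpen(host, port string) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(portOpen("localhost", "8080")) // the Localhost check
	fmt.Println(portOpen("netcat", "8080"))    // the HairPin check
}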

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-d7bhc" [6853849e-1ec2-4648-9e51-137b721e5e18] Running
E0929 11:26:21.018999    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003309067s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-163439 "pgrep -a kubelet"
I0929 11:26:22.387720    4108 config.go:182] Loaded profile config "kindnet-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-163439 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-85f4g" [13070701-ba3e-49d5-8a87-1e10016025ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-85f4g" [13070701-ba3e-49d5-8a87-1e10016025ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003407372s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-163439 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.06s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0929 11:27:03.948886    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:27:31.649183    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:27:42.940842    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.055583246s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.06s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-163439 "pgrep -a kubelet"
I0929 11:27:52.621084    4108 config.go:182] Loaded profile config "custom-flannel-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-163439 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v7q55" [c2243809-7883-4fc0-8816-62079dfae26f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v7q55" [c2243809-7883-4fc0-8816-62079dfae26f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003895737s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)
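
NetCatPod force-replaces the netcat deployment (kubectl replace --force deletes and recreates the objects) and then polls until a pod labelled app=netcat is Running and Ready. A hedged sketch of the equivalent manual wait, assuming the app=netcat label shown in the log:

kubectl --context custom-flannel-163439 replace --force -f testdata/netcat-deployment.yaml
# block until the recreated pod passes its readiness check
kubectl --context custom-flannel-163439 wait --for=condition=ready \
  pod -l app=netcat --timeout=15m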

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-163439 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (75.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0929 11:28:45.556253    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/default-k8s-diff-port-563705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:26.518255    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/default-k8s-diff-port-563705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.698903511s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.70s)
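
Unlike the flannel and bridge runs, this variant passes --enable-default-cni=true, which has minikube lay down its built-in default CNI config on the node instead of deploying a CNI DaemonSet; that is also why this group has no ControllerPod step. Reproduced from the log:

minikube start -p enable-default-cni-163439 \
  --memory=3072 --wait=true --wait-timeout=15m \
  --enable-default-cni=true \
  --driver=docker --container-runtime=crio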

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-163439 "pgrep -a kubelet"
I0929 11:29:41.887229    4108 config.go:182] Loaded profile config "enable-default-cni-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-163439 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xp4gj" [bd6e6d1d-2431-4be1-9e73-24a885451b20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xp4gj" [bd6e6d1d-2431-4be1-9e73-24a885451b20] Running
E0929 11:29:48.559267    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:48.565645    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:48.577096    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:48.598529    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:48.639909    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:48.721348    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:48.882666    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:49.204379    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:49.846368    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:51.128388    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00351755s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-163439 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (56.9s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0929 11:30:26.783407    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/no-preload-373816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:29.537397    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:42.963961    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/functional-599498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:48.440289    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/default-k8s-diff-port-563705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:10.499403    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.897104971s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.90s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nm68b" [c67e94e2-9569-42be-939e-03f3dee752e5] Running
E0929 11:31:16.086062    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:16.092510    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:16.104613    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:16.125998    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:16.167461    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:16.248935    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:16.410459    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00383931s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
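
ControllerPod is specific to CNIs that ship a controller: before exercising the network, the test waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to be healthy, since pod networking cannot work until the CNI daemon is running. Equivalent manual check:

kubectl --context flannel-163439 -n kube-flannel wait \
  --for=condition=ready pod -l app=flannel --timeout=10m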

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-163439 "pgrep -a kubelet"
E0929 11:31:16.733374    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0929 11:31:16.854582    4108 config.go:182] Loaded profile config "flannel-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-163439 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pdz4m" [49165a6f-9b58-471c-9840-891ec7a9e8e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 11:31:17.374738    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:18.656395    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:31:21.218265    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pdz4m" [49165a6f-9b58-471c-9840-891ec7a9e8e4] Running
E0929 11:31:26.340586    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003677522s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-163439 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (74.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0929 11:31:57.064167    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:03.949241    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/old-k8s-version-543900/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:32.421244    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/auto-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:38.025480    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/kindnet-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:52.878895    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:52.885296    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:52.896726    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:52.918093    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:52.959509    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:53.041065    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:53.202539    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:53.524233    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:54.165747    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:55.447284    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:58.009414    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:33:03.130960    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-163439 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.201310928s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-163439 "pgrep -a kubelet"
I0929 11:33:03.841363    4108 config.go:182] Loaded profile config "bridge-163439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-163439 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v2lrx" [322bbd4a-048b-4f2a-ad9b-6fecaee6f771] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 11:33:04.576632    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/default-k8s-diff-port-563705/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-v2lrx" [322bbd4a-048b-4f2a-ad9b-6fecaee6f771] Running
E0929 11:33:10.545748    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/addons-718460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:33:13.372900    4108 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/custom-flannel-163439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003196244s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-163439 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-163439 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

Test skip (32/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.84s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-283576 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-283576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-283576
--- SKIP: TestDownloadOnlyKic (0.84s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718460 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.36s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-287131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-287131
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

TestNetworkPlugins/group/kubenet (3.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-163439 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-163439

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-163439

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /etc/hosts:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /etc/resolv.conf:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-163439

>>> host: crictl pods:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: crictl containers:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> k8s: describe netcat deployment:
error: context "kubenet-163439" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-163439" does not exist

>>> k8s: netcat logs:
error: context "kubenet-163439" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-163439" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-163439" does not exist

>>> k8s: coredns logs:
error: context "kubenet-163439" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-163439" does not exist

>>> k8s: api server logs:
error: context "kubenet-163439" does not exist

>>> host: /etc/cni:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: ip a s:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: ip r s:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: iptables-save:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: iptables table nat:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-163439" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-163439" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-163439" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: kubelet daemon config:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> k8s: kubelet logs:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-472571
contexts:
- context:
    cluster: pause-472571
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-472571
  name: pause-472571
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-472571
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/pause-472571/client.crt
    client-key: /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/pause-472571/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-163439

>>> host: docker daemon status:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: docker daemon config:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: docker system info:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: cri-docker daemon status:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: cri-docker daemon config:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: cri-dockerd version:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: containerd daemon status:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

>>> host: containerd daemon config:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163439"

                                                
                                                
----------------------- debugLogs end: kubenet-163439 [took: 3.311467826s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-163439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-163439
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)
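
Every probe in the kubenet pass printed the same two-line hint because the kubenet-163439 profile was never created before the test was skipped. A minimal sketch of the suggested commands, using the same binary the harness invokes (no flags beyond those shown in the log are assumed):

    out/minikube-linux-arm64 profile list                # view all profiles
    out/minikube-linux-arm64 start -p kubenet-163439     # create the missing profile
    out/minikube-linux-arm64 delete -p kubenet-163439    # cleanup, mirroring helpers_test.go:178 above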

                                                
                                    
TestNetworkPlugins/group/cilium (4.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-163439 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-163439
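
The seven netcat probes above never reached DNS; each failed at the context-lookup stage. For reference, a sketch of what the dig/nc checks amount to once a context exists. The deploy/netcat target is assumed from the section labels, 10.96.0.10 is the DNS service IP the harness itself queries, and the nc flags follow the OpenBSD variant:

    kubectl --context cilium-163439 exec deploy/netcat -- nslookup kubernetes.default
    kubectl --context cilium-163439 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl --context cilium-163439 exec deploy/netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl --context cilium-163439 exec deploy/netcat -- nc -u -z -w 3 10.96.0.10 53   # udp/53
    kubectl --context cilium-163439 exec deploy/netcat -- nc -z -w 3 10.96.0.10 53      # tcp/53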

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-163439" does not exist
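
The eight cilium sections above map onto plain describe/logs calls, with --previous selecting the prior container instance. A sketch under the assumption that Cilium would land in kube-system with its usual daemonset and operator names (the harness labels them only "cilium daemon set" and "cilium deployment"):

    kubectl --context cilium-163439 -n kube-system describe daemonset cilium
    kubectl --context cilium-163439 -n kube-system logs daemonset/cilium --all-containers
    kubectl --context cilium-163439 -n kube-system logs daemonset/cilium --all-containers --previous
    kubectl --context cilium-163439 -n kube-system describe deployment cilium-operator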

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-163439" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-2306/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-472571
contexts:
- context:
    cluster: pause-472571
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:13:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-472571
  name: pause-472571
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-472571
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/pause-472571/client.crt
    client-key: /home/jenkins/minikube-integration/21657-2306/.minikube/profiles/pause-472571/client.key
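
This kubeconfig is byte-for-byte the one dumped in the kubenet section: the harness keeps reusing a shared file that only knows pause-472571. Pinning the context explicitly, as every kubectl line in this pass does, makes the failure mode obvious (a minimal sketch; pause-472571 is the only context that would resolve):

    kubectl --context cilium-163439 get pods -A    # Error in configuration: context was not found
    kubectl --context pause-472571 get pods -A     # resolves against https://192.168.76.2:8443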

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-163439

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-163439" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163439"

                                                
                                                
----------------------- debugLogs end: cilium-163439 [took: 4.290459718s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-163439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-163439
--- SKIP: TestNetworkPlugins/group/cilium (4.48s)

                                                
                                    