Test Report: Docker_Linux_crio_arm64 21594

532dacb4acf31553658ff6b0bf62fcf9309f2277:2025-09-19:41507

Failed tests (7/332)

TestAddons/serial/GCPAuth/FakeCredentials (12.5s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-497709 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-497709 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e85bfec0-0a6a-4c92-ba0a-d392f9c7972e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e85bfec0-0a6a-4c92-ba0a-d392f9c7972e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.003421966s
addons_test.go:694: (dbg) Run:  kubectl --context addons-497709 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:694: (dbg) Non-zero exit: kubectl --context addons-497709 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": exit status 1 (202.953384ms)

** stderr ** 
	command terminated with exit code 1

** /stderr **
addons_test.go:696: printenv creds: exit status 1
--- FAIL: TestAddons/serial/GCPAuth/FakeCredentials (12.50s)
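
Triage note: the gcp-auth addon's mutating webhook is expected to inject GOOGLE_APPLICATION_CREDENTIALS into pods created while the addon is active, so printenv exiting 1 means the variable was never set in the busybox container. A minimal manual re-check against this profile, assuming the cluster from this run is still up (the context and pod names come from the log above; the gcp-auth namespace and deployment names are assumptions based on the addon's defaults):

	kubectl --context addons-497709 exec busybox -- /bin/sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS'  # reproduces the failing check
	kubectl --context addons-497709 get mutatingwebhookconfigurations                                     # is the gcp-auth webhook registered?
	kubectl --context addons-497709 -n gcp-auth logs deploy/gcp-auth                                      # webhook logs, if the addon is still enabled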

TestAddons/parallel/Ingress (153.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-497709 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-497709 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-497709 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0a83a774-fb4b-4f47-8d6d-d0a095f476cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0a83a774-fb4b-4f47-8d6d-d0a095f476cc] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003326819s
I0919 22:18:44.244551    4161 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-497709 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.45026651s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-497709 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
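
Triage note: exit status 28 surfacing through minikube ssh matches curl's CURLE_OPERATION_TIMEDOUT, i.e. the request was issued on the node but nothing answered on port 80 for the nginx.example.com host within the retry window. Some hedged follow-up checks against the same profile (the ingress-nginx namespace and controller deployment name are assumptions based on the addon's defaults, not taken from this log):

	out/minikube-linux-arm64 -p addons-497709 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"  # single attempt with a short timeout
	kubectl --context addons-497709 -n ingress-nginx get pods -o wide                                                     # is the controller actually Ready?
	kubectl --context addons-497709 get ingress -A                                                                        # was the nginx ingress object admitted?
	kubectl --context addons-497709 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50                       # controller logs around the failure
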
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-497709
helpers_test.go:243: (dbg) docker inspect addons-497709:

-- stdout --
	[
	    {
	        "Id": "c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e",
	        "Created": "2025-09-19T22:15:02.515851569Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:15:02.562477889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e/hosts",
	        "LogPath": "/var/lib/docker/containers/c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e/c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e-json.log",
	        "Name": "/addons-497709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-497709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-497709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e",
	                "LowerDir": "/var/lib/docker/overlay2/1e2c6a0d15aaaa8d6a7c6b69e0d64443baffb06beeb24fd48d39bd612e59ef7b-init/diff:/var/lib/docker/overlay2/7a5d5014689cfdaab77901928a3123965a103b6cffc2baf102de2c2f246b4108/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e2c6a0d15aaaa8d6a7c6b69e0d64443baffb06beeb24fd48d39bd612e59ef7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e2c6a0d15aaaa8d6a7c6b69e0d64443baffb06beeb24fd48d39bd612e59ef7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e2c6a0d15aaaa8d6a7c6b69e0d64443baffb06beeb24fd48d39bd612e59ef7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-497709",
	                "Source": "/var/lib/docker/volumes/addons-497709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-497709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-497709",
	                "name.minikube.sigs.k8s.io": "addons-497709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b480856c5c3a7ac18498d25f707568dcbe0679d5316802afd479055a8c8d1fb",
	            "SandboxKey": "/var/run/docker/netns/8b480856c5c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-497709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2a:b4:22:bd:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8eefeed2e90c6dbd1d7a461ced6864d4442c13dc55431d43b472bd439ca07f7c",
	                    "EndpointID": "d9b555a350d446b1fb10a62c78abe5a0d8d56e055d1aca0b3342604e34abc229",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-497709",
	                        "c9170066012a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-497709 -n addons-497709
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 logs -n 25: (1.601769165s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-334793                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-334793 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ --download-only -p binary-mirror-823172 --alsologtostderr --binary-mirror http://127.0.0.1:33227 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-823172   │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ -p binary-mirror-823172                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-823172   │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ addons  │ enable dashboard -p addons-497709                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-497709                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ start   │ -p addons-497709 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-497709 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-497709 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ enable headlamp -p addons-497709 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-497709 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ ip      │ addons-497709 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-497709 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-497709 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-497709 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ ssh     │ addons-497709 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │                     │
	│ addons  │ addons-497709 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	│ addons  │ addons-497709 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-497709                                                                                                                                                                                                                                                                                                                                                                                           │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	│ addons  │ addons-497709 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	│ ssh     │ addons-497709 ssh cat /opt/local-path-provisioner/pvc-024e6e4e-85e0-4958-a7fa-ec7c318c7704_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	│ addons  │ addons-497709 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:20 UTC │
	│ addons  │ addons-497709 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:20 UTC │ 19 Sep 25 22:20 UTC │
	│ addons  │ addons-497709 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:20 UTC │ 19 Sep 25 22:20 UTC │
	│ addons  │ addons-497709 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:20 UTC │ 19 Sep 25 22:20 UTC │
	│ ip      │ addons-497709 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-497709          │ jenkins │ v1.37.0 │ 19 Sep 25 22:20 UTC │ 19 Sep 25 22:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:37.267075    4917 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:37.267249    4917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:37.267274    4917 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:37.267292    4917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:37.267582    4917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:14:37.268080    4917 out.go:368] Setting JSON to false
	I0919 22:14:37.268896    4917 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3428,"bootTime":1758316649,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 22:14:37.268989    4917 start.go:140] virtualization:  
	I0919 22:14:37.270666    4917 out.go:179] * [addons-497709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0919 22:14:37.271877    4917 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:14:37.271943    4917 notify.go:220] Checking for updates...
	I0919 22:14:37.274318    4917 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:37.275583    4917 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:14:37.276796    4917 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 22:14:37.278052    4917 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 22:14:37.279158    4917 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:14:37.280385    4917 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:37.300684    4917 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 22:14:37.300811    4917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:37.366801    4917 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-19 22:14:37.3562155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:14:37.366912    4917 docker.go:318] overlay module found
	I0919 22:14:37.368274    4917 out.go:179] * Using the docker driver based on user configuration
	I0919 22:14:37.369505    4917 start.go:304] selected driver: docker
	I0919 22:14:37.369526    4917 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:37.369566    4917 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:14:37.370309    4917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:37.422766    4917 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-19 22:14:37.414383391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:14:37.422917    4917 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:37.423140    4917 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:14:37.424392    4917 out.go:179] * Using Docker driver with root privileges
	I0919 22:14:37.425435    4917 cni.go:84] Creating CNI manager for ""
	I0919 22:14:37.425499    4917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:37.425510    4917 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:37.425583    4917 start.go:348] cluster config:
	{Name:addons-497709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-497709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0919 22:14:37.426990    4917 out.go:179] * Starting "addons-497709" primary control-plane node in "addons-497709" cluster
	I0919 22:14:37.427994    4917 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:14:37.429151    4917 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:14:37.430379    4917 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:37.430427    4917 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0919 22:14:37.430440    4917 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:37.430453    4917 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:37.430532    4917 preload.go:172] Found /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 22:14:37.430542    4917 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:14:37.430873    4917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/config.json ...
	I0919 22:14:37.430907    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/config.json: {Name:mk0b4bba9b00afd2008f76b8f883adc50a1e5281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:37.445527    4917 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:37.445664    4917 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:37.445683    4917 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0919 22:14:37.445688    4917 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0919 22:14:37.445696    4917 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0919 22:14:37.445701    4917 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0919 22:14:55.331034    4917 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0919 22:14:55.331070    4917 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:14:55.331097    4917 start.go:360] acquireMachinesLock for addons-497709: {Name:mk5f7dcce39125718b24fb1e0053a5605e5e1be4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:14:55.331216    4917 start.go:364] duration metric: took 97.888µs to acquireMachinesLock for "addons-497709"
	I0919 22:14:55.331249    4917 start.go:93] Provisioning new machine with config: &{Name:addons-497709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-497709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:14:55.331327    4917 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:14:55.334766    4917 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0919 22:14:55.335008    4917 start.go:159] libmachine.API.Create for "addons-497709" (driver="docker")
	I0919 22:14:55.335041    4917 client.go:168] LocalClient.Create starting
	I0919 22:14:55.335157    4917 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca.pem
	I0919 22:14:55.559289    4917 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/cert.pem
	I0919 22:14:55.760725    4917 cli_runner.go:164] Run: docker network inspect addons-497709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:14:55.777401    4917 cli_runner.go:211] docker network inspect addons-497709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:14:55.777479    4917 network_create.go:284] running [docker network inspect addons-497709] to gather additional debugging logs...
	I0919 22:14:55.777513    4917 cli_runner.go:164] Run: docker network inspect addons-497709
	W0919 22:14:55.794357    4917 cli_runner.go:211] docker network inspect addons-497709 returned with exit code 1
	I0919 22:14:55.794387    4917 network_create.go:287] error running [docker network inspect addons-497709]: docker network inspect addons-497709: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-497709 not found
	I0919 22:14:55.794401    4917 network_create.go:289] output of [docker network inspect addons-497709]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-497709 not found
	
	** /stderr **
	I0919 22:14:55.794502    4917 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:14:55.812882    4917 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017bc7b0}
	I0919 22:14:55.812929    4917 network_create.go:124] attempt to create docker network addons-497709 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:14:55.812983    4917 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-497709 addons-497709
	I0919 22:14:55.868996    4917 network_create.go:108] docker network addons-497709 192.168.49.0/24 created
	I0919 22:14:55.869023    4917 kic.go:121] calculated static IP "192.168.49.2" for the "addons-497709" container
	I0919 22:14:55.869095    4917 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:14:55.884417    4917 cli_runner.go:164] Run: docker volume create addons-497709 --label name.minikube.sigs.k8s.io=addons-497709 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:14:55.904482    4917 oci.go:103] Successfully created a docker volume addons-497709
	I0919 22:14:55.904582    4917 cli_runner.go:164] Run: docker run --rm --name addons-497709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-497709 --entrypoint /usr/bin/test -v addons-497709:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:14:58.051801    4917 cli_runner.go:217] Completed: docker run --rm --name addons-497709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-497709 --entrypoint /usr/bin/test -v addons-497709:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.147172351s)
	I0919 22:14:58.051837    4917 oci.go:107] Successfully prepared a docker volume addons-497709
	I0919 22:14:58.051868    4917 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:58.051896    4917 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:14:58.051987    4917 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-497709:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:15:02.437559    4917 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-497709:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.385517349s)
	I0919 22:15:02.437592    4917 kic.go:203] duration metric: took 4.385700531s to extract preloaded images to volume ...
	W0919 22:15:02.437747    4917 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 22:15:02.437858    4917 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:15:02.500358    4917 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-497709 --name addons-497709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-497709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-497709 --network addons-497709 --ip 192.168.49.2 --volume addons-497709:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:15:02.818000    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Running}}
	I0919 22:15:02.848243    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:02.867896    4917 cli_runner.go:164] Run: docker exec addons-497709 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:15:02.923650    4917 oci.go:144] the created container "addons-497709" has a running status.
	I0919 22:15:02.923684    4917 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa...
	I0919 22:15:03.186118    4917 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:15:03.212145    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:03.233084    4917 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:15:03.233102    4917 kic_runner.go:114] Args: [docker exec --privileged addons-497709 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:15:03.292041    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:03.325371    4917 machine.go:93] provisionDockerMachine start ...
	I0919 22:15:03.326600    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:03.357199    4917 main.go:141] libmachine: Using SSH client type: native
	I0919 22:15:03.357521    4917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:15:03.357536    4917 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:15:03.358123    4917 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60362->127.0.0.1:32768: read: connection reset by peer
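
The first dial above is reset because sshd inside the freshly started container is not listening yet; the provisioner simply retries until the handshake succeeds, which the next line shows. A hedged sketch of that dial-and-retry pattern using golang.org/x/crypto/ssh (an assumed illustration, not libmachine's code; the key path comes from argv[1], user/port from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPEM, err := os.ReadFile(os.Args[1]) // path to the generated id_rsa
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyPEM)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
    		Timeout:         5 * time.Second,
    	}
    	var client *ssh.Client
    	for { // retry until sshd in the container is ready
    		client, err = ssh.Dial("tcp", "127.0.0.1:32768", cfg)
    		if err == nil {
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	out, err := sess.Output("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }
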
	I0919 22:15:06.497876    4917 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-497709
	
	I0919 22:15:06.497944    4917 ubuntu.go:182] provisioning hostname "addons-497709"
	I0919 22:15:06.498024    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:06.516261    4917 main.go:141] libmachine: Using SSH client type: native
	I0919 22:15:06.516584    4917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:15:06.516601    4917 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-497709 && echo "addons-497709" | sudo tee /etc/hostname
	I0919 22:15:06.666431    4917 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-497709
	
	I0919 22:15:06.666528    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:06.684485    4917 main.go:141] libmachine: Using SSH client type: native
	I0919 22:15:06.684785    4917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:15:06.684801    4917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-497709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-497709/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-497709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:15:06.826320    4917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:15:06.826409    4917 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-2355/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-2355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-2355/.minikube}
	I0919 22:15:06.826461    4917 ubuntu.go:190] setting up certificates
	I0919 22:15:06.826494    4917 provision.go:84] configureAuth start
	I0919 22:15:06.826583    4917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-497709
	I0919 22:15:06.846432    4917 provision.go:143] copyHostCerts
	I0919 22:15:06.846512    4917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-2355/.minikube/ca.pem (1078 bytes)
	I0919 22:15:06.846634    4917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-2355/.minikube/cert.pem (1123 bytes)
	I0919 22:15:06.846694    4917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-2355/.minikube/key.pem (1675 bytes)
	I0919 22:15:06.846780    4917 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-2355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca-key.pem org=jenkins.addons-497709 san=[127.0.0.1 192.168.49.2 addons-497709 localhost minikube]
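
The "generating server cert" line boils down to issuing a CA-signed certificate whose SAN set is exactly the list shown (two IPs, three DNS names). A minimal stdlib crypto/x509 sketch of that operation, assuming a throwaway in-memory CA in place of ca.pem/ca-key.pem (illustrative only, not minikube's implementation):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SAN set from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-497709"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"addons-497709", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
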
	I0919 22:15:07.617717    4917 provision.go:177] copyRemoteCerts
	I0919 22:15:07.617783    4917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:15:07.617824    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:07.635068    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:07.735398    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:15:07.760142    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:15:07.784925    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:15:07.809318    4917 provision.go:87] duration metric: took 982.787914ms to configureAuth
	I0919 22:15:07.809345    4917 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:15:07.809530    4917 config.go:182] Loaded profile config "addons-497709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:15:07.809636    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:07.827538    4917 main.go:141] libmachine: Using SSH client type: native
	I0919 22:15:07.827843    4917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:15:07.827861    4917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:15:08.072463    4917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:15:08.072489    4917 machine.go:96] duration metric: took 4.747099825s to provisionDockerMachine
	I0919 22:15:08.072503    4917 client.go:171] duration metric: took 12.737449068s to LocalClient.Create
	I0919 22:15:08.072534    4917 start.go:167] duration metric: took 12.737516318s to libmachine.API.Create "addons-497709"
	I0919 22:15:08.072549    4917 start.go:293] postStartSetup for "addons-497709" (driver="docker")
	I0919 22:15:08.072567    4917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:15:08.072658    4917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:15:08.072712    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:08.092328    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:08.191256    4917 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:15:08.194232    4917 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:15:08.194282    4917 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:15:08.194293    4917 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:15:08.194301    4917 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:15:08.194314    4917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-2355/.minikube/addons for local assets ...
	I0919 22:15:08.194385    4917 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-2355/.minikube/files for local assets ...
	I0919 22:15:08.194410    4917 start.go:296] duration metric: took 121.848213ms for postStartSetup
	I0919 22:15:08.194719    4917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-497709
	I0919 22:15:08.211174    4917 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/config.json ...
	I0919 22:15:08.211446    4917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:15:08.211493    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:08.227548    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:08.322882    4917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:15:08.327284    4917 start.go:128] duration metric: took 12.99594327s to createHost
	I0919 22:15:08.327308    4917 start.go:83] releasing machines lock for "addons-497709", held for 12.996076351s
	I0919 22:15:08.327382    4917 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-497709
	I0919 22:15:08.345472    4917 ssh_runner.go:195] Run: cat /version.json
	I0919 22:15:08.345526    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:08.345787    4917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:15:08.345848    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:08.364041    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:08.366105    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:08.457637    4917 ssh_runner.go:195] Run: systemctl --version
	I0919 22:15:08.585427    4917 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:15:08.735739    4917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:15:08.739934    4917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:15:08.762221    4917 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:15:08.762363    4917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:15:08.798105    4917 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
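
The two find/mv commands above disable the image's default loopback, bridge, and podman CNI configs by renaming them with a .mk_disabled suffix, so cri-o ignores them and kindnet can own pod networking. A sketch of the same rename-to-disable idiom in Go (illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	matches, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		panic(err)
    	}
    	for _, m := range matches {
    		base := filepath.Base(m)
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue // already disabled on a previous run
    		}
    		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
    			continue // keep anything that isn't a default bridge/podman config
    		}
    		if err := os.Rename(m, m+".mk_disabled"); err != nil {
    			panic(err)
    		}
    		fmt.Println("disabled", m)
    	}
    }
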
	I0919 22:15:08.798127    4917 start.go:495] detecting cgroup driver to use...
	I0919 22:15:08.798160    4917 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 22:15:08.798212    4917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:15:08.815606    4917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:15:08.827375    4917 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:15:08.827463    4917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:15:08.842528    4917 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:15:08.858045    4917 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:15:08.944033    4917 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:15:09.039520    4917 docker.go:234] disabling docker service ...
	I0919 22:15:09.039629    4917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:15:09.059367    4917 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:15:09.071310    4917 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:15:09.150371    4917 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:15:09.261112    4917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:15:09.275680    4917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:15:09.294003    4917 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:15:09.294105    4917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:15:09.305186    4917 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 22:15:09.305256    4917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:15:09.316368    4917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:15:09.326902    4917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:15:09.336652    4917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:15:09.345831    4917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:15:09.355709    4917 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:15:09.370984    4917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
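
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and open unprivileged low ports via default_sysctls. A hedged sketch of the two central rewrites done in-process rather than via sed (file path and replacements mirror the log; the code itself is illustrative):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)
    	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }
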
	I0919 22:15:09.380957    4917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:15:09.389348    4917 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 22:15:09.389408    4917 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 22:15:09.402219    4917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:15:09.411466    4917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:15:09.508225    4917 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:15:09.614370    4917 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:15:09.614466    4917 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:15:09.617958    4917 start.go:563] Will wait 60s for crictl version
	I0919 22:15:09.618063    4917 ssh_runner.go:195] Run: which crictl
	I0919 22:15:09.621345    4917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:15:09.658604    4917 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:15:09.658813    4917 ssh_runner.go:195] Run: crio --version
	I0919 22:15:09.702289    4917 ssh_runner.go:195] Run: crio --version
	I0919 22:15:09.740753    4917 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:15:09.741857    4917 cli_runner.go:164] Run: docker network inspect addons-497709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:15:09.758061    4917 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:15:09.761655    4917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
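
The one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway IP, and copy the result back. The same idiom in Go, as an illustrative sketch (the log stages through /tmp/h.$$ and `sudo cp` only because the runner is unprivileged):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // same filter as grep -v $'\thost.minikube.internal$'
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.168.49.1\thost.minikube.internal")
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
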
	I0919 22:15:09.772584    4917 kubeadm.go:875] updating cluster {Name:addons-497709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-497709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:15:09.772699    4917 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:15:09.772759    4917 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:15:09.854433    4917 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:15:09.854453    4917 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:15:09.854506    4917 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:15:09.891964    4917 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:15:09.891987    4917 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:15:09.891996    4917 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:15:09.892080    4917 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-497709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-497709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:15:09.892164    4917 ssh_runner.go:195] Run: crio config
	I0919 22:15:09.940233    4917 cni.go:84] Creating CNI manager for ""
	I0919 22:15:09.940263    4917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:15:09.940277    4917 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:15:09.940377    4917 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-497709 NodeName:addons-497709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:15:09.940562    4917 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-497709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:15:09.940652    4917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:15:09.949673    4917 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:15:09.949741    4917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 22:15:09.958292    4917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:15:09.976283    4917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:15:09.994543    4917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0919 22:15:10.015279    4917 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 22:15:10.018881    4917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:15:10.030577    4917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:15:10.114813    4917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:15:10.128507    4917 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709 for IP: 192.168.49.2
	I0919 22:15:10.128538    4917 certs.go:194] generating shared ca certs ...
	I0919 22:15:10.128554    4917 certs.go:226] acquiring lock for ca certs: {Name:mk0205fdf5b9231bb3c38f0614d8978a7671f5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:10.128689    4917 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-2355/.minikube/ca.key
	I0919 22:15:10.259114    4917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt ...
	I0919 22:15:10.259146    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt: {Name:mk1a82d6deae7a618375e1e8683bc6a1bf1bddd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:10.259335    4917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-2355/.minikube/ca.key ...
	I0919 22:15:10.259347    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/ca.key: {Name:mk177dbd68f385f82f7238e9de372a9c06e24762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:10.259439    4917 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.key
	I0919 22:15:10.521575    4917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.crt ...
	I0919 22:15:10.521603    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.crt: {Name:mk55c79810622a7012520fdb41f9787dfd8e1eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:10.521773    4917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.key ...
	I0919 22:15:10.521785    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.key: {Name:mkaebf33a17d63373b6857fa0f1651af8efa01d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:10.521850    4917 certs.go:256] generating profile certs ...
	I0919 22:15:10.521902    4917 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.key
	I0919 22:15:10.521922    4917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt with IP's: []
	I0919 22:15:11.581878    4917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt ...
	I0919 22:15:11.581916    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: {Name:mkc3822089480bee32c2f2d5be8ce64c2f384c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:11.582093    4917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.key ...
	I0919 22:15:11.582106    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.key: {Name:mk10c4c26b29734e1287e6a71d3d89146ffb5232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:11.582185    4917 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.key.e19b359e
	I0919 22:15:11.582206    4917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.crt.e19b359e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 22:15:11.708058    4917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.crt.e19b359e ...
	I0919 22:15:11.708086    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.crt.e19b359e: {Name:mkda679f1bf737b485178ab2caa025bd3a7fe800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:11.708258    4917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.key.e19b359e ...
	I0919 22:15:11.708272    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.key.e19b359e: {Name:mk785539227eb81c1fe970c9542c7cfc652139d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:11.708353    4917 certs.go:381] copying /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.crt.e19b359e -> /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.crt
	I0919 22:15:11.708441    4917 certs.go:385] copying /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.key.e19b359e -> /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.key
	I0919 22:15:11.708497    4917 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.key
	I0919 22:15:11.708516    4917 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.crt with IP's: []
	I0919 22:15:12.979700    4917 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.crt ...
	I0919 22:15:12.979730    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.crt: {Name:mk9702a936827d1f71d06251455859feb6ce98e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:12.979911    4917 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.key ...
	I0919 22:15:12.979923    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.key: {Name:mk2ac22c7bbc7a4a7475d8ec3f7017d5760dc072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:12.980118    4917 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:15:12.980155    4917 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:15:12.980182    4917 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:15:12.980203    4917 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-2355/.minikube/certs/key.pem (1675 bytes)
	I0919 22:15:12.980755    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:15:13.006880    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:15:13.034231    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:15:13.058040    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:15:13.084173    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 22:15:13.112043    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:15:13.139095    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:15:13.163114    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:15:13.186826    4917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:15:13.211197    4917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:15:13.228912    4917 ssh_runner.go:195] Run: openssl version
	I0919 22:15:13.234245    4917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:15:13.243473    4917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:15:13.246795    4917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:15:13.246872    4917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:15:13.253630    4917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:15:13.262966    4917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:15:13.266252    4917 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:15:13.266309    4917 kubeadm.go:392] StartCluster: {Name:addons-497709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-497709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:15:13.266389    4917 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:15:13.266451    4917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:15:13.306035    4917 cri.go:89] found id: ""
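
The empty "found id" result above means no kube-system containers survive from a previous run; the probe is just crictl with a namespace label filter. A sketch of the equivalent call (illustrative; it shells out the same way the log's runner does):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out)) // one container ID per line, empty on a fresh node
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
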
	I0919 22:15:13.306145    4917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:15:13.315136    4917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:15:13.323938    4917 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:15:13.323999    4917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:15:13.333260    4917 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:15:13.333279    4917 kubeadm.go:157] found existing configuration files:
	
	I0919 22:15:13.333329    4917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:15:13.342318    4917 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:15:13.342383    4917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:15:13.351048    4917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:15:13.359968    4917 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:15:13.360115    4917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:15:13.368507    4917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:15:13.377033    4917 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:15:13.377096    4917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:15:13.385613    4917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:15:13.394448    4917 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:15:13.394507    4917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
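
The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. A compact sketch of that check (illustrative, not minikube's code):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f) // missing or pointing elsewhere: drop it, kubeadm will regenerate
    		}
    	}
    }
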
	I0919 22:15:13.402647    4917 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:15:13.460055    4917 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0919 22:15:13.460328    4917 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0919 22:15:13.516948    4917 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:15:30.346185    4917 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:15:30.346248    4917 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:15:30.346378    4917 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:15:30.346440    4917 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0919 22:15:30.346478    4917 kubeadm.go:310] OS: Linux
	I0919 22:15:30.346526    4917 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:15:30.346577    4917 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 22:15:30.346628    4917 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:15:30.346679    4917 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:15:30.346730    4917 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:15:30.346781    4917 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:15:30.346828    4917 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:15:30.346879    4917 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:15:30.346927    4917 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 22:15:30.347004    4917 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:15:30.347103    4917 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:15:30.347198    4917 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:15:30.347263    4917 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:15:30.348666    4917 out.go:252]   - Generating certificates and keys ...
	I0919 22:15:30.348753    4917 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:15:30.348827    4917 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:15:30.348901    4917 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:15:30.348966    4917 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:15:30.349034    4917 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:15:30.349091    4917 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:15:30.349151    4917 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:15:30.349277    4917 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-497709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:15:30.349336    4917 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:15:30.349459    4917 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-497709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:15:30.349532    4917 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:15:30.349602    4917 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:15:30.349651    4917 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:15:30.349713    4917 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:15:30.349770    4917 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:15:30.349833    4917 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:15:30.349895    4917 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:15:30.349966    4917 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:15:30.350027    4917 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:15:30.350116    4917 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:15:30.350189    4917 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:15:30.351403    4917 out.go:252]   - Booting up control plane ...
	I0919 22:15:30.351507    4917 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:15:30.351616    4917 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:15:30.351703    4917 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:15:30.351823    4917 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:15:30.351941    4917 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:15:30.352076    4917 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:15:30.352182    4917 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:15:30.352232    4917 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:15:30.352384    4917 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:15:30.352513    4917 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:15:30.352592    4917 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001135456s
	I0919 22:15:30.352705    4917 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:15:30.352849    4917 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:15:30.352951    4917 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:15:30.353044    4917 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:15:30.353156    4917 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.039997135s
	I0919 22:15:30.353231    4917 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.247158425s
	I0919 22:15:30.353313    4917 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001260887s
	I0919 22:15:30.353439    4917 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:15:30.353592    4917 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:15:30.353660    4917 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:15:30.353900    4917 kubeadm.go:310] [mark-control-plane] Marking the node addons-497709 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:15:30.353988    4917 kubeadm.go:310] [bootstrap-token] Using token: d2mosv.aqzlmgt8x6wmxrsx
	I0919 22:15:30.356208    4917 out.go:252]   - Configuring RBAC rules ...
	I0919 22:15:30.356327    4917 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:15:30.356420    4917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:15:30.356599    4917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:15:30.356790    4917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:15:30.356918    4917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:15:30.357039    4917 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:15:30.357182    4917 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:15:30.357256    4917 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:15:30.357333    4917 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:15:30.357342    4917 kubeadm.go:310] 
	I0919 22:15:30.357412    4917 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:15:30.357422    4917 kubeadm.go:310] 
	I0919 22:15:30.357509    4917 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:15:30.357517    4917 kubeadm.go:310] 
	I0919 22:15:30.357544    4917 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:15:30.357608    4917 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:15:30.357670    4917 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:15:30.357684    4917 kubeadm.go:310] 
	I0919 22:15:30.357746    4917 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:15:30.357753    4917 kubeadm.go:310] 
	I0919 22:15:30.357807    4917 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:15:30.357811    4917 kubeadm.go:310] 
	I0919 22:15:30.357870    4917 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:15:30.357951    4917 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:15:30.358025    4917 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:15:30.358032    4917 kubeadm.go:310] 
	I0919 22:15:30.358130    4917 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:15:30.358216    4917 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:15:30.358225    4917 kubeadm.go:310] 
	I0919 22:15:30.358330    4917 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d2mosv.aqzlmgt8x6wmxrsx \
	I0919 22:15:30.358445    4917 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:07cd534ac46fcba11022c2bcb880ac410db02fa6bfc90b9448e4e75401a163f4 \
	I0919 22:15:30.358471    4917 kubeadm.go:310] 	--control-plane 
	I0919 22:15:30.358478    4917 kubeadm.go:310] 
	I0919 22:15:30.358599    4917 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:15:30.358616    4917 kubeadm.go:310] 
	I0919 22:15:30.358712    4917 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d2mosv.aqzlmgt8x6wmxrsx \
	I0919 22:15:30.358845    4917 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:07cd534ac46fcba11022c2bcb880ac410db02fa6bfc90b9448e4e75401a163f4 
	I0919 22:15:30.358859    4917 cni.go:84] Creating CNI manager for ""
	I0919 22:15:30.358868    4917 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:15:30.360156    4917 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:15:30.361302    4917 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:15:30.365137    4917 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:15:30.365157    4917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:15:30.384219    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:15:30.660976    4917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:15:30.661061    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-497709 minikube.k8s.io/updated_at=2025_09_19T22_15_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=addons-497709 minikube.k8s.io/primary=true
	I0919 22:15:30.661018    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:30.815566    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:30.815629    4917 ops.go:34] apiserver oom_adj: -16
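
The -16 oom_adj reading confirms the API server is shielded from the kernel OOM killer (lower scores are killed last). The same check, runnable on the node:

    # negative values mean the OOM killer prefers other processes first
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"
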
	I0919 22:15:31.316196    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:31.815761    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:32.316429    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:32.816456    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:33.316233    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:33.815866    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:34.316245    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:34.815601    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:35.316607    4917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:35.409129    4917 kubeadm.go:1105] duration metric: took 4.748161253s to wait for elevateKubeSystemPrivileges
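
The `get sa default` polling above is elevateKubeSystemPrivileges waiting for the default ServiceAccount to be provisioned, so that the minikube-rbac binding created at 22:15:30.661018 has a live subject. An equivalent sketch with a plain kubectl on PATH:

    # grant cluster-admin to kube-system's default ServiceAccount
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default

    # poll until the controller manager has created the default ServiceAccount
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
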
	I0919 22:15:35.409161    4917 kubeadm.go:394] duration metric: took 22.142855396s to StartCluster
	I0919 22:15:35.409179    4917 settings.go:142] acquiring lock: {Name:mk18f3c5c326d5a7c341649487d7c67080df4f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:35.409308    4917 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:15:35.409666    4917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-2355/kubeconfig: {Name:mk613534163d783a52d1af86833b2a47e0e4383b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:35.409860    4917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:15:35.409908    4917 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:15:35.410105    4917 config.go:182] Loaded profile config "addons-497709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:15:35.410138    4917 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
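
The toEnable map is the full addon selection for this profile; each entry can also be toggled individually from the CLI. For example (hypothetical invocations against this profile):

    # enable or disable a single addon on the running profile
    minikube -p addons-497709 addons enable registry
    minikube -p addons-497709 addons disable volcano
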
	I0919 22:15:35.410226    4917 addons.go:69] Setting yakd=true in profile "addons-497709"
	I0919 22:15:35.410239    4917 addons.go:238] Setting addon yakd=true in "addons-497709"
	I0919 22:15:35.410283    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.410722    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.411056    4917 addons.go:69] Setting metrics-server=true in profile "addons-497709"
	I0919 22:15:35.411083    4917 addons.go:238] Setting addon metrics-server=true in "addons-497709"
	I0919 22:15:35.411116    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.411524    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.411669    4917 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-497709"
	I0919 22:15:35.411689    4917 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-497709"
	I0919 22:15:35.411712    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.412093    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.412539    4917 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-497709"
	I0919 22:15:35.412563    4917 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-497709"
	I0919 22:15:35.412631    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.412636    4917 addons.go:69] Setting registry=true in profile "addons-497709"
	I0919 22:15:35.412662    4917 addons.go:238] Setting addon registry=true in "addons-497709"
	I0919 22:15:35.412684    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.413050    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.413064    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.417547    4917 addons.go:69] Setting cloud-spanner=true in profile "addons-497709"
	I0919 22:15:35.417573    4917 addons.go:238] Setting addon cloud-spanner=true in "addons-497709"
	I0919 22:15:35.417613    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.418228    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.421106    4917 addons.go:69] Setting registry-creds=true in profile "addons-497709"
	I0919 22:15:35.421138    4917 addons.go:238] Setting addon registry-creds=true in "addons-497709"
	I0919 22:15:35.421174    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.421621    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.428173    4917 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-497709"
	I0919 22:15:35.428236    4917 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-497709"
	I0919 22:15:35.428271    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.428753    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.438376    4917 addons.go:69] Setting storage-provisioner=true in profile "addons-497709"
	I0919 22:15:35.438457    4917 addons.go:238] Setting addon storage-provisioner=true in "addons-497709"
	I0919 22:15:35.438506    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.441417    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.453245    4917 addons.go:69] Setting default-storageclass=true in profile "addons-497709"
	I0919 22:15:35.453339    4917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-497709"
	I0919 22:15:35.453732    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.471902    4917 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-497709"
	I0919 22:15:35.472052    4917 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-497709"
	I0919 22:15:35.479260    4917 addons.go:69] Setting volcano=true in profile "addons-497709"
	I0919 22:15:35.479302    4917 addons.go:238] Setting addon volcano=true in "addons-497709"
	I0919 22:15:35.479333    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.479828    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.481791    4917 addons.go:69] Setting gcp-auth=true in profile "addons-497709"
	I0919 22:15:35.481822    4917 mustload.go:65] Loading cluster: addons-497709
	I0919 22:15:35.482004    4917 config.go:182] Loaded profile config "addons-497709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:15:35.482230    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.498364    4917 addons.go:69] Setting volumesnapshots=true in profile "addons-497709"
	I0919 22:15:35.498394    4917 addons.go:238] Setting addon volumesnapshots=true in "addons-497709"
	I0919 22:15:35.498429    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.498988    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.499777    4917 addons.go:69] Setting ingress=true in profile "addons-497709"
	I0919 22:15:35.499836    4917 addons.go:238] Setting addon ingress=true in "addons-497709"
	I0919 22:15:35.499891    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.500330    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.514727    4917 addons.go:69] Setting ingress-dns=true in profile "addons-497709"
	I0919 22:15:35.514766    4917 addons.go:238] Setting addon ingress-dns=true in "addons-497709"
	I0919 22:15:35.514809    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.515274    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.516966    4917 addons.go:69] Setting inspektor-gadget=true in profile "addons-497709"
	I0919 22:15:35.516989    4917 addons.go:238] Setting addon inspektor-gadget=true in "addons-497709"
	I0919 22:15:35.517145    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.518648    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.536024    4917 out.go:179] * Verifying Kubernetes components...
	I0919 22:15:35.537349    4917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:15:35.538435    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.572438    4917 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0919 22:15:35.572822    4917 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 22:15:35.596024    4917 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0919 22:15:35.603795    4917 addons.go:238] Setting addon default-storageclass=true in "addons-497709"
	I0919 22:15:35.603840    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.604262    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.614845    4917 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0919 22:15:35.616214    4917 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0919 22:15:35.616273    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 22:15:35.616347    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.622883    4917 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 22:15:35.622909    4917 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 22:15:35.622979    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.632628    4917 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0919 22:15:35.633753    4917 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0919 22:15:35.633824    4917 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0919 22:15:35.634018    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0919 22:15:35.634091    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.635129    4917 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 22:15:35.635185    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 22:15:35.635837    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.653731    4917 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0919 22:15:35.656951    4917 out.go:179]   - Using image docker.io/registry:3.0.0
	I0919 22:15:35.657094    4917 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 22:15:35.657153    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0919 22:15:35.657279    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.660935    4917 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0919 22:15:35.664632    4917 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0919 22:15:35.664655    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0919 22:15:35.664741    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.677076    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 22:15:35.677290    4917 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:15:35.680293    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 22:15:35.681097    4917 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:15:35.681158    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:15:35.681249    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.682662    4917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0919 22:15:35.683553    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 22:15:35.686598    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 22:15:35.687433    4917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:35.692095    4917 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 22:15:35.692117    4917 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 22:15:35.692191    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.734178    4917 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 22:15:35.734199    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 22:15:35.734370    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.735384    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W0919 22:15:35.754863    4917 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 22:15:35.766373    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 22:15:35.769145    4917 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 22:15:35.769169    4917 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 22:15:35.769239    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.788170    4917 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0919 22:15:35.788395    4917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:35.789630    4917 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-497709"
	I0919 22:15:35.789666    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.790058    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
	I0919 22:15:35.795877    4917 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:15:35.795895    4917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:15:35.795957    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.846633    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:35.848124    4917 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 22:15:35.848138    4917 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0919 22:15:35.848191    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.877698    4917 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 22:15:35.877719    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 22:15:35.877780    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.881073    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.884592    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 22:15:35.888220    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.894182    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 22:15:35.897226    4917 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 22:15:35.904194    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 22:15:35.904217    4917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 22:15:35.904280    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:35.929384    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.934407    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.935162    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.949198    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.954564    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.955200    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.971969    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:35.992222    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:36.014879    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:36.029543    4917 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 22:15:36.033964    4917 out.go:179]   - Using image docker.io/busybox:stable
	I0919 22:15:36.034799    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:36.036947    4917 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 22:15:36.036968    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 22:15:36.037029    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:36.045742    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	I0919 22:15:36.048225    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	W0919 22:15:36.049882    4917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 22:15:36.049925    4917 retry.go:31] will retry after 196.214173ms: ssh: handshake failed: EOF
	I0919 22:15:36.074299    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	W0919 22:15:36.075531    4917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 22:15:36.075551    4917 retry.go:31] will retry after 338.401966ms: ssh: handshake failed: EOF
	I0919 22:15:36.189591    4917 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 22:15:36.189664    4917 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 22:15:36.318786    4917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:15:36.319044    4917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:15:36.350666    4917 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 22:15:36.350687    4917 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 22:15:36.360694    4917 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 22:15:36.360713    4917 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 22:15:36.391477    4917 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 22:15:36.391551    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 22:15:36.424560    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 22:15:36.442807    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:15:36.495748    4917 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 22:15:36.495822    4917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 22:15:36.504796    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 22:15:36.510251    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0919 22:15:36.516500    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 22:15:36.525936    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0919 22:15:36.526225    4917 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 22:15:36.526278    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 22:15:36.529756    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 22:15:36.529811    4917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 22:15:36.584667    4917 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 22:15:36.584693    4917 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 22:15:36.587825    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 22:15:36.594470    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:15:36.608818    4917 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:36.608887    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0919 22:15:36.737893    4917 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 22:15:36.737964    4917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 22:15:36.792277    4917 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 22:15:36.792298    4917 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 22:15:36.795415    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 22:15:36.795483    4917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 22:15:36.841249    4917 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 22:15:36.841318    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 22:15:36.843949    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:36.857370    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 22:15:36.980630    4917 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 22:15:36.980704    4917 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 22:15:37.008528    4917 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 22:15:37.008604    4917 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 22:15:37.057236    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 22:15:37.057306    4917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 22:15:37.100641    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 22:15:37.114769    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 22:15:37.246852    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 22:15:37.246925    4917 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 22:15:37.264421    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 22:15:37.294634    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 22:15:37.294703    4917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 22:15:37.388289    4917 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:15:37.388359    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 22:15:37.448266    4917 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 22:15:37.448338    4917 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 22:15:37.510328    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:15:37.526766    4917 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 22:15:37.526834    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 22:15:37.646762    4917 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 22:15:37.646837    4917 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 22:15:37.725025    4917 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 22:15:37.725093    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 22:15:37.943106    4917 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 22:15:37.943176    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 22:15:38.150437    4917 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 22:15:38.150509    4917 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 22:15:38.324867    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 22:15:39.358397    4917 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.03930549s)
	I0919 22:15:39.358476    4917 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
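
The three-second pipeline above splices a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to the gateway 192.168.49.1. A quick check that the record took, assuming kubectl targets this cluster:

    # the rewritten Corefile should carry the injected hosts stanza
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'

    # and a throwaway pod should resolve the host gateway by name
    kubectl run dns-check --rm -it --restart=Never --image=busybox:stable -- \
      nslookup host.minikube.internal
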
	I0919 22:15:39.359117    4917 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.040258765s)
	I0919 22:15:39.359843    4917 node_ready.go:35] waiting up to 6m0s for node "addons-497709" to be "Ready" ...
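
The 6m node wait is the usual readiness gate; the kubelet reports Ready only once the CNI applied earlier is functional. The CLI equivalent:

    # block until the node condition Ready=True, or fail after six minutes
    kubectl wait --for=condition=Ready node/addons-497709 --timeout=6m
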
	I0919 22:15:39.513759    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.089120617s)
	I0919 22:15:39.951957    4917 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-497709" context rescaled to 1 replicas
	I0919 22:15:40.378915    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.936038275s)
	I0919 22:15:40.379029    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.874172894s)
	I0919 22:15:40.379099    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.86876416s)
	I0919 22:15:40.379176    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.862603844s)
	I0919 22:15:40.379193    4917 addons.go:479] Verifying addon registry=true in "addons-497709"
	I0919 22:15:40.379255    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.853252454s)
	I0919 22:15:40.379306    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.791415406s)
	I0919 22:15:40.379337    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.784808314s)
	I0919 22:15:40.384333    4917 out.go:179] * Verifying registry addon...
	I0919 22:15:40.387914    4917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 22:15:40.422288    4917 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 22:15:40.422310    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
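
Each "Verifying ... addon" phase polls pods by label selector until they leave Pending. A one-liner expressing the same wait, using the selector from the log:

    # block until the registry pod reports Ready
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m
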
	I0919 22:15:40.451986    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.607960237s)
	W0919 22:15:40.452018    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:40.452097    4917 retry.go:31] will retry after 207.585163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
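
The "apiVersion not set, kind not set" failure is consistent with the copy at 22:15:35.848138, where ig-crd.yaml arrived as only 14 bytes: the deployment manifest in the same apply succeeds (all the "created" lines above), but an effectively empty CRD file cannot pass validation, so every retry hits the same error. A way to confirm on the node, under that assumption:

    # an intact CRD manifest is far larger than 14 bytes and declares both fields
    wc -c /etc/kubernetes/addons/ig-crd.yaml
    grep -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml \
      || echo "ig-crd.yaml is missing apiVersion/kind"
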
	I0919 22:15:40.660193    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:40.971680    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:15:41.390056    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:41.410933    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:41.592513    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.735054135s)
	I0919 22:15:41.592547    4917 addons.go:479] Verifying addon ingress=true in "addons-497709"
	I0919 22:15:41.592660    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.491933803s)
	I0919 22:15:41.592890    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.4780504s)
	I0919 22:15:41.593205    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.328710679s)
	I0919 22:15:41.593234    4917 addons.go:479] Verifying addon metrics-server=true in "addons-497709"
	I0919 22:15:41.593325    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.082923243s)
	W0919 22:15:41.593353    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 22:15:41.593368    4917 retry.go:31] will retry after 358.070502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
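
The VolumeSnapshotClass failure is a CRD-ordering race rather than a bad manifest: the class is submitted in the same apply as the CRDs that define its kind, and API discovery has not yet caught up, which is why the forced retry a moment later succeeds. Splitting the apply removes the race; a sketch using the same manifest paths:

    # create the snapshot CRDs first and wait for the API server to serve them
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io

    # only then create the snapshot class and its controller
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
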
	I0919 22:15:41.596161    4917 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-497709 service yakd-dashboard -n yakd-dashboard
	
	I0919 22:15:41.596329    4917 out.go:179] * Verifying ingress addon...
	I0919 22:15:41.600781    4917 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 22:15:41.627143    4917 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 22:15:41.627169    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:41.903740    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:41.952256    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:15:42.131627    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:42.193341    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.868378864s)
	I0919 22:15:42.193383    4917 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-497709"
	I0919 22:15:42.193507    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.533263887s)
	W0919 22:15:42.193552    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:42.193578    4917 retry.go:31] will retry after 493.906242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:42.198876    4917 out.go:179] * Verifying csi-hostpath-driver addon...
	I0919 22:15:42.202801    4917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 22:15:42.213376    4917 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 22:15:42.213442    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:42.396965    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:42.605097    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:42.688417    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:42.707605    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:42.891231    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:43.110847    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:43.210329    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:43.391701    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:43.604070    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:43.706208    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:43.862880    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:43.890870    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:44.104144    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:44.205924    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:44.391089    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:44.604513    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:44.708689    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:44.797039    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.844730591s)
	I0919 22:15:44.797151    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.108700552s)
	W0919 22:15:44.797183    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:44.797205    4917 retry.go:31] will retry after 607.190203ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:44.891001    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:45.107704    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:45.207864    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:45.392848    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:45.405216    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:45.513868    4917 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 22:15:45.513957    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:45.536030    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
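
The ssh client tuple logged here (IP 127.0.0.1, host port 32768 mapped to the container's 22/tcp, the per-machine id_rsa key, user "docker") is everything needed to reach the node. A standalone sketch of the same connection with golang.org/x/crypto/ssh; the relaxed host-key callback is acceptable only because the target is a local test container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port copied from the log line above; adjust per run.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /var/lib/minikube/google_cloud_project")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
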
	I0919 22:15:45.606981    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:45.691133    4917 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 22:15:45.706932    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:45.711181    4917 addons.go:238] Setting addon gcp-auth=true in "addons-497709"
	I0919 22:15:45.711242    4917 host.go:66] Checking if "addons-497709" exists ...
	I0919 22:15:45.711777    4917 cli_runner.go:164] Run: docker container inspect addons-497709 --format={{.State.Status}}
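
cli_runner's Docker lookups in this stretch are plain Docker CLI invocations with Go templates: one reads the container's state, the other extracts the host port bound to 22/tcp. Reduced to a sketch with os/exec (minikube wraps these calls in its own runner with logging and timing):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Container state, e.g. "running".
		state, err := exec.Command("docker", "container", "inspect",
			"addons-497709", "--format", "{{.State.Status}}").Output()
		if err != nil {
			panic(err)
		}
		// Host port mapped to the container's SSH port.
		port, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-497709").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("state=%s port=%s", state, port)
	}
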
	I0919 22:15:45.736087    4917 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 22:15:45.736138    4917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-497709
	I0919 22:15:45.758130    4917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/addons-497709/id_rsa Username:docker}
	W0919 22:15:45.863009    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:45.891067    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:46.105131    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:46.207048    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:46.308915    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:46.308945    4917 retry.go:31] will retry after 822.932921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:46.312720    4917 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0919 22:15:46.315596    4917 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:46.318394    4917 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 22:15:46.318420    4917 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 22:15:46.336931    4917 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 22:15:46.337001    4917 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 22:15:46.355675    4917 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 22:15:46.355697    4917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 22:15:46.375345    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 22:15:46.391976    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:46.604356    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:46.713259    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:46.842434    4917 addons.go:479] Verifying addon gcp-auth=true in "addons-497709"
	I0919 22:15:46.845681    4917 out.go:179] * Verifying gcp-auth addon...
	I0919 22:15:46.848832    4917 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 22:15:46.853146    4917 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 22:15:46.853169    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
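
Each "waiting for pod" line is one iteration of a poll that lists pods by label selector and checks their phase. The same query, sketched with client-go (standard package paths; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("gcp-auth").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// Phase is what the log prints as "current state: Pending".
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}
	}
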
	I0919 22:15:46.891053    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:47.104456    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:47.132770    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:47.207974    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:47.353659    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:47.392202    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:47.605281    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:47.705976    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:47.852804    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:47.864027    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:47.890843    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:15:47.938548    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:47.938591    4917 retry.go:31] will retry after 1.188797539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:48.104460    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:48.205964    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:48.351591    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:48.391517    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:48.604791    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:48.706980    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:48.852107    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:48.890838    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:49.104096    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:49.128096    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:49.206311    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:49.352608    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:49.391657    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:49.604276    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:49.710703    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:49.854207    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:49.892103    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:15:49.928315    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:49.928359    4917 retry.go:31] will retry after 1.641987041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:50.104716    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:50.205574    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:50.352250    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:50.363528    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:50.391484    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:50.605014    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:50.706333    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:50.852170    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:50.890881    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:51.104050    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:51.206432    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:51.352441    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:51.390952    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:51.571318    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:51.604693    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:51.706564    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:51.852130    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:51.892619    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:52.105037    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:52.207336    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:52.352155    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:52.363846    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	W0919 22:15:52.372904    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:52.372978    4917 retry.go:31] will retry after 1.913876357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:52.391011    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:52.604303    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:52.709343    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:52.851951    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:52.891709    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:53.104727    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:53.205398    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:53.352550    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:53.390864    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:53.604736    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:53.706329    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:53.852121    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:53.891460    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:54.104406    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:54.206188    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:54.287591    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:54.352734    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:54.367610    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:54.391561    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:54.604929    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:54.706461    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:54.861006    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:54.891477    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:55.104564    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0919 22:15:55.118529    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:55.118559    4917 retry.go:31] will retry after 3.132258787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:55.206323    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:55.352090    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:55.391539    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:55.604589    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:55.706146    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:55.852147    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:55.891449    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:56.104431    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:56.206208    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:56.352114    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:56.391437    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:56.605020    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:56.705968    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:56.851521    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:56.863256    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:56.891169    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:57.104497    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:57.205948    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:57.352235    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:57.390714    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:57.603706    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:57.705630    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:57.852533    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:57.891005    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:58.104143    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:58.205871    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:58.251152    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:58.352642    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:58.391522    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:58.605254    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:58.707410    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:58.853099    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:58.891088    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:15:59.080101    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:59.080149    4917 retry.go:31] will retry after 5.152962288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:59.112877    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:59.205991    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:59.351729    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:59.363388    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:15:59.391131    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:59.604234    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:59.705809    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:59.851527    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:59.891175    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:00.106511    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:00.212274    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:00.352813    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:00.391676    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:00.603818    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:00.706885    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:00.851765    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:00.891207    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:01.104790    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:01.206938    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:01.351884    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:01.364016    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:01.390706    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:01.603854    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:01.705674    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:01.852749    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:01.891607    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:02.105068    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:02.205281    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:02.352548    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:02.390721    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:02.604576    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:02.706867    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:02.851779    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:02.891400    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:03.104846    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:03.206809    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:03.351879    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:03.391361    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:03.604855    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:03.706384    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:03.852395    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:03.863228    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:03.891094    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:04.104642    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:04.205572    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:04.233841    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:04.352168    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:04.391299    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:04.604816    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:04.706792    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:04.852568    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:04.890252    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:16:05.035628    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:05.035660    4917 retry.go:31] will retry after 6.645145217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:05.104737    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:05.206570    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:05.352530    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:05.390723    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:05.604915    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:05.705972    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:05.851755    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:05.863511    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:05.891405    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:06.104544    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:06.206406    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:06.352162    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:06.390797    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:06.603626    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:06.706454    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:06.852207    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:06.891493    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:07.104477    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:07.206329    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:07.352511    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:07.390942    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:07.604177    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:07.706213    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:07.853103    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:07.891433    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:08.104590    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:08.206396    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:08.352467    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:08.364217    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:08.391223    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:08.604639    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:08.706433    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:08.852406    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:08.890862    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:09.104113    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:09.206123    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:09.352518    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:09.391053    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:09.604323    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:09.706253    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:09.852277    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:09.890665    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:10.104754    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:10.205567    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:10.352668    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:10.391148    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:10.604312    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:10.706086    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:10.851941    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:10.863060    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:10.890963    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:11.104354    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:11.206316    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:11.352225    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:11.390649    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:11.604769    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:11.680994    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:11.706684    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:11.851748    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:11.892334    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:12.104851    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:12.206805    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:12.352255    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.392132    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:16:12.482532    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:12.482562    4917 retry.go:31] will retry after 20.366039008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
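
Across these eight attempts the retry delays climb from about 0.6s to 20s, which looks like exponential backoff with randomized jitter; the exact policy in retry.go is not visible in this log, so the following is a generic sketch of that pattern rather than minikube's implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// next doubles the delay each attempt and adds up to +50% random jitter,
	// an assumed policy that roughly reproduces the delays seen above.
	func next(attempt int, base time.Duration) time.Duration {
		d := base << attempt // base * 2^attempt
		return d + time.Duration(rand.Int63n(int64(d)/2+1))
	}

	func main() {
		for attempt := 0; attempt < 6; attempt++ {
			fmt.Println(next(attempt, 500*time.Millisecond))
		}
	}
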
	I0919 22:16:12.604563    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:12.706340    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:12.852346    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.890899    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:13.103918    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:13.206073    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:13.352192    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:13.362851    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:13.390534    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:13.604683    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:13.705493    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:13.852356    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:13.890852    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:14.104129    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:14.205695    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:14.351548    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:14.390947    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:14.603888    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:14.707140    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:14.851817    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:14.891164    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:15.104235    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:15.206160    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:15.352138    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:15.362966    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:15.391561    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:15.604640    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:15.706653    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:15.852219    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:15.891468    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:16.105156    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:16.205671    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:16.352621    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:16.391423    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:16.604884    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:16.705745    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:16.851784    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:16.891070    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:17.104246    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:17.206329    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:17.352292    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:16:17.363069    4917 node_ready.go:57] node "addons-497709" has "Ready":"False" status (will retry)
	I0919 22:16:17.391098    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:17.604055    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:17.705659    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:17.859477    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:17.891511    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:18.104759    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:18.255862    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:18.384594    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:18.388442    4917 node_ready.go:49] node "addons-497709" is "Ready"
	I0919 22:16:18.388489    4917 node_ready.go:38] duration metric: took 39.028583753s for node "addons-497709" to be "Ready" ...
	I0919 22:16:18.388504    4917 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:16:18.388599    4917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:16:18.405589    4917 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 22:16:18.405623    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:18.413724    4917 api_server.go:72] duration metric: took 43.003787559s to wait for apiserver process to appear ...
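
The apiserver-process wait above shells out to `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH. A minimal standalone sketch of the same check, assuming a local shell rather than minikube's SSH runner (the helper name and error handling are illustrative, not minikube's actual code):

	// Hypothetical sketch of the process check logged above; pgrep
	// exits 0 when at least one process matches the pattern, so the
	// exit status alone answers the question. Run locally it omits
	// the sudo/SSH layer the real runner provides.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func apiserverRunning() bool {
		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		fmt.Println("kube-apiserver running:", apiserverRunning())
	}
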
	I0919 22:16:18.413760    4917 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:16:18.413778    4917 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:16:18.425471    4917 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:16:18.426873    4917 api_server.go:141] control plane version: v1.34.0
	I0919 22:16:18.426910    4917 api_server.go:131] duration metric: took 13.142789ms to wait for apiserver health ...
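
The healthz wait polls the endpoint until it returns HTTP 200 with body "ok", as logged above. A minimal sketch of such a probe (the URL comes from the log; skipping TLS verification is an illustrative shortcut, whereas a real client should trust the cluster CA certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// InsecureSkipVerify is only for illustration; do not use it
		// against a cluster you care about.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
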
	I0919 22:16:18.426919    4917 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:16:18.452121    4917 system_pods.go:59] 19 kube-system pods found
	I0919 22:16:18.452161    4917 system_pods.go:61] "coredns-66bc5c9577-l4hcz" [8c3d2c42-7002-41a3-b6fa-1fa130f83384] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:16:18.452177    4917 system_pods.go:61] "csi-hostpath-attacher-0" [6eff11a7-f0a1-4d6f-897e-a4fc494af6ff] Pending
	I0919 22:16:18.452184    4917 system_pods.go:61] "csi-hostpath-resizer-0" [f4963b1d-2825-4835-b6e7-61e29ca115a3] Pending
	I0919 22:16:18.452189    4917 system_pods.go:61] "csi-hostpathplugin-r89dx" [a4f99a7c-4d62-467a-aa90-aced24719f27] Pending
	I0919 22:16:18.452210    4917 system_pods.go:61] "etcd-addons-497709" [cce26e24-22fd-4e8b-823f-7fd7e7151801] Running
	I0919 22:16:18.452217    4917 system_pods.go:61] "kindnet-6rhw9" [1dc80cd5-2b7a-41a1-ae30-9d906383d6f4] Running
	I0919 22:16:18.452231    4917 system_pods.go:61] "kube-apiserver-addons-497709" [aace0e6c-d22e-44e8-8de2-a2c440d35133] Running
	I0919 22:16:18.452236    4917 system_pods.go:61] "kube-controller-manager-addons-497709" [e11ba4f7-7e67-478a-8d8d-0174360f4842] Running
	I0919 22:16:18.452241    4917 system_pods.go:61] "kube-ingress-dns-minikube" [b82509ea-da92-4061-86c7-f18978b498f8] Pending
	I0919 22:16:18.452280    4917 system_pods.go:61] "kube-proxy-mc88b" [2dcd0b40-5614-4ac0-a5c3-5e6e2508a43a] Running
	I0919 22:16:18.452291    4917 system_pods.go:61] "kube-scheduler-addons-497709" [7ced1b2f-87da-487c-8e5f-df8977d55e23] Running
	I0919 22:16:18.452296    4917 system_pods.go:61] "metrics-server-85b7d694d7-xh9kv" [8789fcd4-548d-41f1-add8-a41bac88d2ee] Pending
	I0919 22:16:18.452301    4917 system_pods.go:61] "nvidia-device-plugin-daemonset-k5jwt" [30c55f34-f5da-4ea2-a567-585f81ada4f1] Pending
	I0919 22:16:18.452305    4917 system_pods.go:61] "registry-66898fdd98-9bs6l" [fddc0661-55c1-4ea7-8f3f-96c0da3e6157] Pending
	I0919 22:16:18.452316    4917 system_pods.go:61] "registry-creds-764b6fb674-m559k" [e6509517-819b-45f8-b449-8c61b4abf737] Pending
	I0919 22:16:18.452320    4917 system_pods.go:61] "registry-proxy-vw4wm" [517a7877-3698-4b63-bb25-804a2404813a] Pending
	I0919 22:16:18.452324    4917 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rwhtl" [4999b2ff-0962-46e5-93e4-e08bd8abd3a7] Pending
	I0919 22:16:18.452328    4917 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xgvlt" [6eed8e57-a08a-4d4a-87d3-57551e309d7f] Pending
	I0919 22:16:18.452352    4917 system_pods.go:61] "storage-provisioner" [96f1fe53-326a-4e75-9c61-99d1d5acc12d] Pending
	I0919 22:16:18.452358    4917 system_pods.go:74] duration metric: took 25.432702ms to wait for pod list to return data ...
	I0919 22:16:18.452366    4917 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:16:18.471309    4917 default_sa.go:45] found service account: "default"
	I0919 22:16:18.471337    4917 default_sa.go:55] duration metric: took 18.963639ms for default service account to be created ...
	I0919 22:16:18.471347    4917 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:16:18.574119    4917 system_pods.go:86] 19 kube-system pods found
	I0919 22:16:18.574165    4917 system_pods.go:89] "coredns-66bc5c9577-l4hcz" [8c3d2c42-7002-41a3-b6fa-1fa130f83384] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:16:18.574173    4917 system_pods.go:89] "csi-hostpath-attacher-0" [6eff11a7-f0a1-4d6f-897e-a4fc494af6ff] Pending
	I0919 22:16:18.574198    4917 system_pods.go:89] "csi-hostpath-resizer-0" [f4963b1d-2825-4835-b6e7-61e29ca115a3] Pending
	I0919 22:16:18.574210    4917 system_pods.go:89] "csi-hostpathplugin-r89dx" [a4f99a7c-4d62-467a-aa90-aced24719f27] Pending
	I0919 22:16:18.574214    4917 system_pods.go:89] "etcd-addons-497709" [cce26e24-22fd-4e8b-823f-7fd7e7151801] Running
	I0919 22:16:18.574219    4917 system_pods.go:89] "kindnet-6rhw9" [1dc80cd5-2b7a-41a1-ae30-9d906383d6f4] Running
	I0919 22:16:18.574224    4917 system_pods.go:89] "kube-apiserver-addons-497709" [aace0e6c-d22e-44e8-8de2-a2c440d35133] Running
	I0919 22:16:18.574276    4917 system_pods.go:89] "kube-controller-manager-addons-497709" [e11ba4f7-7e67-478a-8d8d-0174360f4842] Running
	I0919 22:16:18.574288    4917 system_pods.go:89] "kube-ingress-dns-minikube" [b82509ea-da92-4061-86c7-f18978b498f8] Pending
	I0919 22:16:18.574293    4917 system_pods.go:89] "kube-proxy-mc88b" [2dcd0b40-5614-4ac0-a5c3-5e6e2508a43a] Running
	I0919 22:16:18.574297    4917 system_pods.go:89] "kube-scheduler-addons-497709" [7ced1b2f-87da-487c-8e5f-df8977d55e23] Running
	I0919 22:16:18.574301    4917 system_pods.go:89] "metrics-server-85b7d694d7-xh9kv" [8789fcd4-548d-41f1-add8-a41bac88d2ee] Pending
	I0919 22:16:18.574305    4917 system_pods.go:89] "nvidia-device-plugin-daemonset-k5jwt" [30c55f34-f5da-4ea2-a567-585f81ada4f1] Pending
	I0919 22:16:18.574309    4917 system_pods.go:89] "registry-66898fdd98-9bs6l" [fddc0661-55c1-4ea7-8f3f-96c0da3e6157] Pending
	I0919 22:16:18.574313    4917 system_pods.go:89] "registry-creds-764b6fb674-m559k" [e6509517-819b-45f8-b449-8c61b4abf737] Pending
	I0919 22:16:18.574321    4917 system_pods.go:89] "registry-proxy-vw4wm" [517a7877-3698-4b63-bb25-804a2404813a] Pending
	I0919 22:16:18.574338    4917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwhtl" [4999b2ff-0962-46e5-93e4-e08bd8abd3a7] Pending
	I0919 22:16:18.574357    4917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xgvlt" [6eed8e57-a08a-4d4a-87d3-57551e309d7f] Pending
	I0919 22:16:18.574367    4917 system_pods.go:89] "storage-provisioner" [96f1fe53-326a-4e75-9c61-99d1d5acc12d] Pending
	I0919 22:16:18.574381    4917 retry.go:31] will retry after 272.946503ms: missing components: kube-dns
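
Each system_pods pass ends as above: list what is still missing (here kube-dns) and schedule another attempt after a randomized delay. A hedged sketch of that poll-with-jittered-delay pattern, under the assumption that missingComponents stands in for the real pod check (it is a placeholder, not retry.go's implementation):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// missingComponents is a stand-in for the real system-pods scan.
	func missingComponents() []string {
		return nil // pretend everything is already running
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			missing := missingComponents()
			if len(missing) == 0 {
				fmt.Println("all components running")
				return
			}
			// Randomized sub-second delay, echoing log lines like
			// "will retry after 272.946503ms".
			delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
			time.Sleep(delay)
		}
		fmt.Println("timed out waiting for components")
	}
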
	I0919 22:16:18.635628    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:18.742126    4917 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 22:16:18.742150    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:18.859168    4917 system_pods.go:86] 19 kube-system pods found
	I0919 22:16:18.859203    4917 system_pods.go:89] "coredns-66bc5c9577-l4hcz" [8c3d2c42-7002-41a3-b6fa-1fa130f83384] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:16:18.859239    4917 system_pods.go:89] "csi-hostpath-attacher-0" [6eff11a7-f0a1-4d6f-897e-a4fc494af6ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:16:18.859254    4917 system_pods.go:89] "csi-hostpath-resizer-0" [f4963b1d-2825-4835-b6e7-61e29ca115a3] Pending
	I0919 22:16:18.859261    4917 system_pods.go:89] "csi-hostpathplugin-r89dx" [a4f99a7c-4d62-467a-aa90-aced24719f27] Pending
	I0919 22:16:18.859271    4917 system_pods.go:89] "etcd-addons-497709" [cce26e24-22fd-4e8b-823f-7fd7e7151801] Running
	I0919 22:16:18.859276    4917 system_pods.go:89] "kindnet-6rhw9" [1dc80cd5-2b7a-41a1-ae30-9d906383d6f4] Running
	I0919 22:16:18.859280    4917 system_pods.go:89] "kube-apiserver-addons-497709" [aace0e6c-d22e-44e8-8de2-a2c440d35133] Running
	I0919 22:16:18.859304    4917 system_pods.go:89] "kube-controller-manager-addons-497709" [e11ba4f7-7e67-478a-8d8d-0174360f4842] Running
	I0919 22:16:18.859314    4917 system_pods.go:89] "kube-ingress-dns-minikube" [b82509ea-da92-4061-86c7-f18978b498f8] Pending
	I0919 22:16:18.859319    4917 system_pods.go:89] "kube-proxy-mc88b" [2dcd0b40-5614-4ac0-a5c3-5e6e2508a43a] Running
	I0919 22:16:18.859323    4917 system_pods.go:89] "kube-scheduler-addons-497709" [7ced1b2f-87da-487c-8e5f-df8977d55e23] Running
	I0919 22:16:18.859330    4917 system_pods.go:89] "metrics-server-85b7d694d7-xh9kv" [8789fcd4-548d-41f1-add8-a41bac88d2ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:16:18.859338    4917 system_pods.go:89] "nvidia-device-plugin-daemonset-k5jwt" [30c55f34-f5da-4ea2-a567-585f81ada4f1] Pending
	I0919 22:16:18.859343    4917 system_pods.go:89] "registry-66898fdd98-9bs6l" [fddc0661-55c1-4ea7-8f3f-96c0da3e6157] Pending
	I0919 22:16:18.859350    4917 system_pods.go:89] "registry-creds-764b6fb674-m559k" [e6509517-819b-45f8-b449-8c61b4abf737] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:16:18.859358    4917 system_pods.go:89] "registry-proxy-vw4wm" [517a7877-3698-4b63-bb25-804a2404813a] Pending
	I0919 22:16:18.859363    4917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwhtl" [4999b2ff-0962-46e5-93e4-e08bd8abd3a7] Pending
	I0919 22:16:18.859389    4917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xgvlt" [6eed8e57-a08a-4d4a-87d3-57551e309d7f] Pending
	I0919 22:16:18.859396    4917 system_pods.go:89] "storage-provisioner" [96f1fe53-326a-4e75-9c61-99d1d5acc12d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:16:18.859426    4917 retry.go:31] will retry after 277.066187ms: missing components: kube-dns
	I0919 22:16:18.917167    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:18.924172    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:19.119402    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:19.164299    4917 system_pods.go:86] 19 kube-system pods found
	I0919 22:16:19.164345    4917 system_pods.go:89] "coredns-66bc5c9577-l4hcz" [8c3d2c42-7002-41a3-b6fa-1fa130f83384] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:16:19.164356    4917 system_pods.go:89] "csi-hostpath-attacher-0" [6eff11a7-f0a1-4d6f-897e-a4fc494af6ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:16:19.164363    4917 system_pods.go:89] "csi-hostpath-resizer-0" [f4963b1d-2825-4835-b6e7-61e29ca115a3] Pending
	I0919 22:16:19.164370    4917 system_pods.go:89] "csi-hostpathplugin-r89dx" [a4f99a7c-4d62-467a-aa90-aced24719f27] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:16:19.164391    4917 system_pods.go:89] "etcd-addons-497709" [cce26e24-22fd-4e8b-823f-7fd7e7151801] Running
	I0919 22:16:19.164410    4917 system_pods.go:89] "kindnet-6rhw9" [1dc80cd5-2b7a-41a1-ae30-9d906383d6f4] Running
	I0919 22:16:19.164415    4917 system_pods.go:89] "kube-apiserver-addons-497709" [aace0e6c-d22e-44e8-8de2-a2c440d35133] Running
	I0919 22:16:19.164420    4917 system_pods.go:89] "kube-controller-manager-addons-497709" [e11ba4f7-7e67-478a-8d8d-0174360f4842] Running
	I0919 22:16:19.164432    4917 system_pods.go:89] "kube-ingress-dns-minikube" [b82509ea-da92-4061-86c7-f18978b498f8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:16:19.164436    4917 system_pods.go:89] "kube-proxy-mc88b" [2dcd0b40-5614-4ac0-a5c3-5e6e2508a43a] Running
	I0919 22:16:19.164479    4917 system_pods.go:89] "kube-scheduler-addons-497709" [7ced1b2f-87da-487c-8e5f-df8977d55e23] Running
	I0919 22:16:19.164492    4917 system_pods.go:89] "metrics-server-85b7d694d7-xh9kv" [8789fcd4-548d-41f1-add8-a41bac88d2ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:16:19.164497    4917 system_pods.go:89] "nvidia-device-plugin-daemonset-k5jwt" [30c55f34-f5da-4ea2-a567-585f81ada4f1] Pending
	I0919 22:16:19.164504    4917 system_pods.go:89] "registry-66898fdd98-9bs6l" [fddc0661-55c1-4ea7-8f3f-96c0da3e6157] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:16:19.164514    4917 system_pods.go:89] "registry-creds-764b6fb674-m559k" [e6509517-819b-45f8-b449-8c61b4abf737] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:16:19.164519    4917 system_pods.go:89] "registry-proxy-vw4wm" [517a7877-3698-4b63-bb25-804a2404813a] Pending
	I0919 22:16:19.164523    4917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rwhtl" [4999b2ff-0962-46e5-93e4-e08bd8abd3a7] Pending
	I0919 22:16:19.164534    4917 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xgvlt" [6eed8e57-a08a-4d4a-87d3-57551e309d7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:16:19.164564    4917 system_pods.go:89] "storage-provisioner" [96f1fe53-326a-4e75-9c61-99d1d5acc12d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:16:19.164590    4917 system_pods.go:126] duration metric: took 693.227052ms to wait for k8s-apps to be running ...
	I0919 22:16:19.164605    4917 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:16:19.164689    4917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:16:19.214405    4917 system_svc.go:56] duration metric: took 49.791215ms WaitForService to wait for kubelet
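
The kubelet service check above uses `systemctl is-active --quiet`, which prints nothing and reports the unit's state purely through its exit code. A brief illustrative equivalent, simplified to query the kubelet unit directly without the sudo/SSH wrapper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means the unit is active; any other status
		// (or a failure to run systemctl) means it is not.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
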
	I0919 22:16:19.214436    4917 kubeadm.go:578] duration metric: took 43.804503712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:16:19.214488    4917 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:16:19.239303    4917 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 22:16:19.239345    4917 node_conditions.go:123] node cpu capacity is 2
	I0919 22:16:19.239379    4917 node_conditions.go:105] duration metric: took 24.883411ms to run NodePressure ...
	I0919 22:16:19.239400    4917 start.go:241] waiting for startup goroutines ...
	I0919 22:16:19.271188    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:19.363157    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:19.443645    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:19.605015    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:19.705919    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:19.852245    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:19.891947    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:20.105944    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:20.206375    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:20.352619    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:20.391897    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:20.604702    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:20.705666    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:20.851740    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:20.891821    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:21.105071    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:21.206202    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:21.352294    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:21.391731    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:21.604487    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:21.707126    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:21.852598    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:21.892663    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:22.105876    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:22.206574    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:22.353305    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:22.391875    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:22.604530    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:22.707647    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:22.852472    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:22.891283    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:23.105867    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:23.208997    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:23.352846    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:23.392569    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:23.605174    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:23.709724    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:23.853087    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:23.891115    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:24.104935    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:24.206683    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:24.364030    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:24.392139    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:24.611112    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:24.707276    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:24.869858    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:24.913690    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:25.105453    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:25.214817    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:25.357786    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:25.392393    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:25.607103    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:25.706332    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:25.856360    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:25.891593    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:26.105075    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:26.208198    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:26.357320    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.391124    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:26.604816    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:26.707861    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:26.859262    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.891486    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:27.105524    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:27.209978    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.352461    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:27.391254    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:27.604937    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:27.706950    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.851852    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:27.890932    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:28.104125    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:28.206292    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:28.352488    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:28.391986    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:28.611001    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:28.706942    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:28.852459    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:28.891695    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:29.106161    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:29.206724    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:29.352165    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:29.391058    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:29.605293    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:29.706692    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:29.851606    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:29.891146    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:30.104742    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:30.206563    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:30.355565    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:30.391773    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:30.604417    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:30.706478    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:30.853107    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:30.890940    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:31.104238    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:31.206495    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:31.352904    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:31.391411    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:31.606100    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:31.706971    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:31.852414    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:31.891604    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:32.106447    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:32.207537    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:32.352211    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:32.391819    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:32.604417    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:32.707293    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:32.849476    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:32.853929    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:32.893664    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:33.104158    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:33.207027    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:33.353117    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:33.391153    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:33.609403    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:33.706926    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:33.852086    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:33.891695    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:34.109910    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260378511s)
	W0919 22:16:34.109950    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:34.109979    4917 retry.go:31] will retry after 23.113306534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
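
The retried apply fails because kubectl's client-side validation requires every manifest document to declare both apiVersion and kind; the generated ig-crd.yaml evidently carries neither. A stdlib-only sketch of that specific check on a hypothetical manifest (a crude line-prefix scan, nothing like kubectl's real schema validation):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Hypothetical manifest content missing apiVersion and kind,
		// the condition the log reports for ig-crd.yaml.
		manifest := "metadata:\n  name: example\n"
		var hasAPIVersion, hasKind bool
		for _, line := range strings.Split(manifest, "\n") {
			switch {
			case strings.HasPrefix(line, "apiVersion:"):
				hasAPIVersion = true
			case strings.HasPrefix(line, "kind:"):
				hasKind = true
			}
		}
		if !hasAPIVersion || !hasKind {
			fmt.Println("error validating data: [apiVersion not set, kind not set]")
		}
	}
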
	I0919 22:16:34.113927    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:34.210284    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:34.351913    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:34.391826    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:34.605241    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:34.706419    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:34.852379    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:34.891332    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:35.105347    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:35.207330    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:35.352637    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:35.391515    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:35.629108    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:35.726731    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:35.851541    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:35.892139    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:36.139787    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:36.221995    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:36.354000    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:36.430123    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:36.606083    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:36.706958    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:36.853079    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:36.892228    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:37.105183    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:37.211195    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:37.352875    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:37.392780    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:37.604124    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:37.708048    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:37.852769    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:37.892817    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:38.104611    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:38.206739    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:38.351960    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:38.393242    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:38.608765    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:38.707027    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:38.852363    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:38.891629    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:39.104267    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:39.207493    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:39.352996    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:39.391221    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:39.604668    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:39.707184    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:39.852454    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:39.891544    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:40.105304    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:40.207062    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:40.356672    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:40.457883    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:40.604039    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:40.707592    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:40.853175    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:40.891599    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:41.105633    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:41.211174    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:41.352780    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:41.391901    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:41.603950    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:41.709387    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:41.851874    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:41.891835    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:42.106916    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:42.207049    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:42.352670    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:42.391426    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:42.606314    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:42.706687    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:42.851691    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:42.891580    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:43.103672    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:43.206776    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:43.352876    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:43.391426    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:43.604578    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:43.706730    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:43.852616    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:43.891825    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:44.104214    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:44.206306    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:44.353207    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:44.392676    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:44.603562    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:44.706729    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:44.852355    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:44.891014    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:45.113642    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:45.209090    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:45.358830    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:45.393125    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:45.604999    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:45.710586    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:45.853037    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:45.893818    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:46.104270    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:46.206310    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:46.353125    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:46.392348    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:46.605728    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:46.705957    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:46.852470    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:46.891795    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:47.104571    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:47.207502    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:47.352592    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:47.391650    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:47.630379    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:47.711130    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:47.853206    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:47.891147    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:48.105031    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:48.206044    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:48.355998    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:48.455363    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:48.605929    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:48.706342    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:48.852631    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:48.891928    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:49.104240    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:49.206970    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:49.352171    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:49.394610    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:49.603589    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:49.707195    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:49.851907    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:49.941880    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:50.105685    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:50.207000    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:50.352081    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:50.391419    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:50.605054    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:50.706493    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:50.852626    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:50.891753    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:51.105969    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:51.226149    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:51.352455    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:51.392004    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:51.604635    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:51.707097    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:51.852804    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:51.891647    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:52.120108    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:52.217559    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:52.353613    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:52.392239    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:52.604517    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:52.714781    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:52.852849    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:52.891266    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:53.107677    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:53.207328    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:53.353706    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:53.391607    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:53.604320    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:53.707558    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:53.853944    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:53.891554    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:54.105687    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:54.206480    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:54.352652    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:54.391832    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:54.604320    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:54.708669    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:54.853283    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:54.892641    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:55.104340    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:55.209060    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:55.352404    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:55.391831    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:55.605105    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:55.706402    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:55.860210    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:55.955749    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:56.104861    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:56.206741    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:56.352087    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:56.391367    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:56.604935    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:56.706092    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:56.852129    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:56.894312    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:57.107529    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:57.207990    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:57.224288    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:57.352912    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:57.394610    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:57.605073    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:57.706706    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:57.922344    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:57.962340    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:58.109446    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:58.233031    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:58.352376    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:58.391932    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:58.606183    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:58.707189    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:58.852497    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:58.891581    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:59.086851    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.862530074s)
	W0919 22:16:59.086887    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:59.086908    4917 retry.go:31] will retry after 31.249091786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
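The validation failure above is kubectl's client-side schema check: every document in a manifest must carry both apiVersion and kind, and ig-crd.yaml evidently contains one that does not. A minimal way to surface the same error without touching the cluster, assuming a local copy of the manifest, is a client-side dry run:

  # Hypothetical local copy of the failing addon manifest; any YAML
  # document missing apiVersion or kind fails validation the same way.
  kubectl apply --dry-run=client -f ig-crd.yaml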
	I0919 22:16:59.105659    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:59.208546    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:59.352797    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:59.391685    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:59.604160    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:59.706666    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:59.853243    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:59.891789    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:00.145708    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:00.209386    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:00.353540    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:00.392499    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:00.606001    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:00.706673    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:00.852518    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:00.953392    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:01.105752    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:01.209604    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:01.357295    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:01.455204    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:01.605186    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:01.711281    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:01.856165    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:01.891837    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:02.108244    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:02.207025    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:02.352909    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:02.391843    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:02.604828    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:02.707522    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:02.852787    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:02.891647    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:03.109775    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:03.227413    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:03.358698    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:03.460587    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:03.604675    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:03.709901    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:03.852097    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:03.891187    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:04.110376    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:04.208216    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:04.354676    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:04.394124    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:04.605238    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:04.707466    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:04.853156    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:04.892059    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:05.105638    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:05.206854    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:05.354419    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:05.455593    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:05.605057    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:05.706368    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:05.852313    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:05.891439    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:06.111697    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:06.212341    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:06.352919    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:06.391508    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:06.604923    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:06.707414    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:06.852839    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:06.892160    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:07.105098    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:07.206301    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:07.356008    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:07.453862    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:07.604408    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:07.712384    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:07.853795    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:07.891583    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:08.111995    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:08.206014    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:08.352131    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:08.391235    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:08.605461    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:08.706765    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:08.851731    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:08.891970    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:09.103753    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:09.206116    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:09.352639    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:09.391390    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:09.604392    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:09.706706    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:09.851643    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:09.891474    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:10.104982    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:10.209281    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:10.355333    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:10.391121    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:10.604348    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:10.706720    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:10.852713    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:10.892708    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:11.106315    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:11.208674    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:11.354409    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:11.392549    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:11.604575    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:11.706763    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:11.851706    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:11.891654    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:12.105692    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:12.205818    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:12.354365    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:12.393595    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:12.609580    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:12.707806    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:12.852440    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:12.893092    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:17:13.110414    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:13.208271    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:13.352425    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:13.392037    4917 kapi.go:107] duration metric: took 1m33.004122166s to wait for kubernetes.io/minikube-addons=registry ...
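The registry wait has now succeeded after 1m33s of the polling above. A manual spot-check uses the same label selector the waiter polls on (a sketch, assuming the default minikube layout where registry addon pods land in kube-system):

  kubectl --context addons-497709 -n kube-system get pods \
    -l kubernetes.io/minikube-addons=registry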
	I0919 22:17:13.604541    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:13.706419    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:13.852040    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:14.109348    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:14.211728    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:14.353010    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:14.605012    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:14.709157    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:14.852251    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:15.104227    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:15.208250    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:15.358581    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:15.604100    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:15.707337    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:15.852091    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:16.108595    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:16.207566    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:16.352541    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:16.607021    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:16.706730    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:16.852712    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:17.104432    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:17.206670    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:17.351779    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:17.613099    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:17.707001    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:17.867336    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:18.110107    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:18.206996    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:18.351753    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:18.604424    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:18.706574    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:18.853076    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:19.104666    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:19.207250    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:19.352421    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:19.605267    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:19.707934    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:19.854254    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:20.105106    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:20.206601    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:20.355139    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:20.605160    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:20.708273    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:20.852159    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:21.108720    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:21.217892    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:21.352567    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:21.607739    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:21.708503    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:21.853457    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:22.105370    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:22.207462    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:22.355950    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:22.603960    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:22.713096    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:22.852417    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:23.105326    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:23.206488    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:23.352672    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:23.604330    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:23.707317    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:23.856726    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:24.104927    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:24.206498    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:24.352479    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:24.604566    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:24.707285    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:24.852819    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:25.109637    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:25.207046    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:25.352638    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:25.606576    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:25.707210    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:25.878848    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:26.137937    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:26.217890    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:26.352558    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:26.604749    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:26.706269    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:26.853274    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:27.104922    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:27.207128    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:27.352604    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:27.604088    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:27.707052    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:27.851880    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:28.105241    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:28.207277    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:28.362918    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:28.606615    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:28.707001    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:28.852323    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:29.104948    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:29.206863    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:29.352935    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:29.604256    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:29.707465    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:29.856329    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:30.104972    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:30.206493    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:30.336540    4917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:17:30.355162    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:30.614144    4917 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:17:30.716867    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:30.851847    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:31.105623    4917 kapi.go:107] duration metric: took 1m49.504839844s to wait for app.kubernetes.io/name=ingress-nginx ...
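Same pattern for the ingress wait that just completed; the controller pod that appears in the container status table below (ingress-nginx-controller-9cc49f96f-xspwv) can be checked directly with the selector from the log:

  kubectl --context addons-497709 -n ingress-nginx get pods \
    -l app.kubernetes.io/name=ingress-nginx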
	I0919 22:17:31.207095    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:31.352837    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:31.398523    4917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.061927704s)
	W0919 22:17:31.398569    4917 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 22:17:31.398660    4917 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
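Because the apply is wrapped in a retry loop and still failed on the second attempt, the inspektor-gadget addon ends this run in an error state while the remaining addons proceed. Once the manifest is fixed, re-running the enable repeats the same apply; a sketch using this run's profile name:

  minikube -p addons-497709 addons enable inspektor-gadget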
	I0919 22:17:31.705984    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:31.852186    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:32.206594    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:32.352707    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:32.706346    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:32.852239    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:33.207838    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:33.351912    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:33.707534    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:33.852972    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:34.209199    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:34.352965    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:34.706586    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:34.852847    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:35.207332    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:35.353189    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:17:35.706820    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:35.851760    4917 kapi.go:107] duration metric: took 1m49.002932578s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 22:17:35.853077    4917 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-497709 cluster.
	I0919 22:17:35.854161    4917 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 22:17:35.855345    4917 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
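The skip label the message refers to has to be present at pod creation time. A minimal sketch of an opt-out pod (pod name, image, and label value are illustrative; only the gcp-auth-skip-secret label key matters):

  # "nocreds" and the label value "true" are placeholders.
  kubectl run nocreds --image=busybox --restart=Never \
    --labels=gcp-auth-skip-secret=true -- sleep 300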
	I0919 22:17:36.206682    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:36.711013    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:37.207312    4917 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:17:37.709765    4917 kapi.go:107] duration metric: took 1m55.506968341s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
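csi-hostpath-driver was the last waiter to finish (1m55s). Its pods use the same minikube addon label convention; a spot-check, assuming the default kube-system placement:

  kubectl --context addons-497709 -n kube-system get pods \
    -l kubernetes.io/minikube-addons=csi-hostpath-driver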
	I0919 22:17:37.711434    4917 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, amd-gpu-device-plugin, registry-creds, ingress-dns, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 22:17:37.713292    4917 addons.go:514] duration metric: took 2m2.303150414s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner amd-gpu-device-plugin registry-creds ingress-dns default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
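The enabled-addons summary above can be reproduced after the fact from the host:

  minikube -p addons-497709 addons list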
	I0919 22:17:37.713377    4917 start.go:246] waiting for cluster config update ...
	I0919 22:17:37.713412    4917 start.go:255] writing updated cluster config ...
	I0919 22:17:37.713752    4917 ssh_runner.go:195] Run: rm -f paused
	I0919 22:17:37.718942    4917 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:17:37.723364    4917 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l4hcz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:37.730723    4917 pod_ready.go:94] pod "coredns-66bc5c9577-l4hcz" is "Ready"
	I0919 22:17:37.730750    4917 pod_ready.go:86] duration metric: took 7.304265ms for pod "coredns-66bc5c9577-l4hcz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:37.733624    4917 pod_ready.go:83] waiting for pod "etcd-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:37.738182    4917 pod_ready.go:94] pod "etcd-addons-497709" is "Ready"
	I0919 22:17:37.738254    4917 pod_ready.go:86] duration metric: took 4.605376ms for pod "etcd-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:37.740578    4917 pod_ready.go:83] waiting for pod "kube-apiserver-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:37.746138    4917 pod_ready.go:94] pod "kube-apiserver-addons-497709" is "Ready"
	I0919 22:17:37.746165    4917 pod_ready.go:86] duration metric: took 5.567422ms for pod "kube-apiserver-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:37.748549    4917 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:38.123146    4917 pod_ready.go:94] pod "kube-controller-manager-addons-497709" is "Ready"
	I0919 22:17:38.123174    4917 pod_ready.go:86] duration metric: took 374.599345ms for pod "kube-controller-manager-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:38.323781    4917 pod_ready.go:83] waiting for pod "kube-proxy-mc88b" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:38.722930    4917 pod_ready.go:94] pod "kube-proxy-mc88b" is "Ready"
	I0919 22:17:38.723009    4917 pod_ready.go:86] duration metric: took 399.203203ms for pod "kube-proxy-mc88b" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:38.924379    4917 pod_ready.go:83] waiting for pod "kube-scheduler-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:39.323330    4917 pod_ready.go:94] pod "kube-scheduler-addons-497709" is "Ready"
	I0919 22:17:39.323356    4917 pod_ready.go:86] duration metric: took 398.95369ms for pod "kube-scheduler-addons-497709" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:17:39.323369    4917 pod_ready.go:40] duration metric: took 1.604343655s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
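The pod_ready loop is minikube's own readiness poll over the listed control-plane labels; an equivalent one-shot check with stock kubectl, using one of the selectors from the log:

  kubectl --context addons-497709 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s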
	I0919 22:17:39.723449    4917 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0919 22:17:39.724782    4917 out.go:179] * Done! kubectl is now configured to use "addons-497709" cluster and "default" namespace by default
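The "minor skew: 1" note is informational: kubectl is supported against an API server one minor version away, so a 1.33 client driving a 1.34 cluster is within the documented skew policy. Both versions can be confirmed with:

  kubectl --context addons-497709 version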
	
	
	==> CRI-O <==
	Sep 19 22:20:34 addons-497709 crio[989]: time="2025-09-19 22:20:34.472461416Z" level=info msg="Removed container 6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a: default/cloud-spanner-emulator-85f6b7fc65-smhwp/cloud-spanner-emulator" id=bf76cde8-b175-4181-a8d8-e5754e8266fa name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.190352081Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-x6gtr/POD" id=af90478c-4d52-4f74-948e-3dd7e60cbb31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.190409197Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.234970377Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-x6gtr Namespace:default ID:8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3 UID:be7778d9-c39a-4877-9e24-fb899ec4d4dc NetNS:/var/run/netns/65e7022c-37da-4a49-bff5-a8c85b5ca7fd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.235030825Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-x6gtr to CNI network \"kindnet\" (type=ptp)"
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.245937195Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-x6gtr Namespace:default ID:8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3 UID:be7778d9-c39a-4877-9e24-fb899ec4d4dc NetNS:/var/run/netns/65e7022c-37da-4a49-bff5-a8c85b5ca7fd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.246081632Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-x6gtr for CNI network kindnet (type=ptp)"
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.253310942Z" level=info msg="Ran pod sandbox 8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3 with infra container: default/hello-world-app-5d498dc89-x6gtr/POD" id=af90478c-4d52-4f74-948e-3dd7e60cbb31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.255123535Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=264e04e6-1fa9-4c50-9013-27dc2ef83564 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.255333581Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=264e04e6-1fa9-4c50-9013-27dc2ef83564 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.257583160Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=4855027c-a631-43fe-8984-3ab173568379 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.260017415Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 19 22:20:56 addons-497709 crio[989]: time="2025-09-19 22:20:56.501583496Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.165839815Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=4855027c-a631-43fe-8984-3ab173568379 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.166393578Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=93df0f32-b29d-4143-b53a-4d93a10344fb name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.167025635Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=93df0f32-b29d-4143-b53a-4d93a10344fb name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.167994025Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3b47dbc4-ccd4-4647-9b38-3c72ce2a0e93 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.168600277Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3b47dbc4-ccd4-4647-9b38-3c72ce2a0e93 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.174453135Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-x6gtr/hello-world-app" id=378ed9a8-4488-4e05-a308-7969fac59232 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.174684440Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.199599037Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e478e70a9623e71581156a9ddc58adc67b5183b4bd3837910e9c34e67e415616/merged/etc/passwd: no such file or directory"
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.199649138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e478e70a9623e71581156a9ddc58adc67b5183b4bd3837910e9c34e67e415616/merged/etc/group: no such file or directory"
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.261093773Z" level=info msg="Created container 3c3a9972980a33c36007258c01252232a900322c58a1355d75ec0a15559871aa: default/hello-world-app-5d498dc89-x6gtr/hello-world-app" id=378ed9a8-4488-4e05-a308-7969fac59232 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.262592886Z" level=info msg="Starting container: 3c3a9972980a33c36007258c01252232a900322c58a1355d75ec0a15559871aa" id=a4cea586-1320-42ca-ad61-ead7af7e783b name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:20:57 addons-497709 crio[989]: time="2025-09-19 22:20:57.269245581Z" level=info msg="Started container" PID=9917 containerID=3c3a9972980a33c36007258c01252232a900322c58a1355d75ec0a15559871aa description=default/hello-world-app-5d498dc89-x6gtr/hello-world-app id=a4cea586-1320-42ca-ad61-ead7af7e783b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3
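The CRI-O log above traces a full cold pull: image status check, miss, pull by tag, resolution to a digest, then container create and start. The pull step can be replayed by hand on the node (a sketch; crictl talks to the same CRI-O socket):

  minikube -p addons-497709 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0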
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	3c3a9972980a3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   8f34fe98ee6ef       hello-world-app-5d498dc89-x6gtr
	a4bfff218d6a7       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   5b7104eea8317       nginx
	8a63d4477ebf6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   220064de88885       busybox
	e80de67e9db72       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   b478f5aa95f45       ingress-nginx-controller-9cc49f96f-xspwv
	9f77bb69e74da       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago            Running             gadget                    0                   469b37b03f196       gadget-xlwfp
	2b0012d007b4f       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             3 minutes ago            Exited              patch                     2                   cacd688c8357f       ingress-nginx-admission-patch-cdqhf
	a851974d29d52       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago            Exited              create                    0                   6af972dba0bda       ingress-nginx-admission-create-ncdqb
	7097add69d373       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               4 minutes ago            Running             minikube-ingress-dns      0                   90f5d4566b5aa       kube-ingress-dns-minikube
	afc005584780a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   40c4c0d2b503d       storage-provisioner
	c6dc33fa7d243       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             4 minutes ago            Running             coredns                   0                   0b4141543b2b5       coredns-66bc5c9577-l4hcz
	290d2628506a6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             5 minutes ago            Running             kindnet-cni               0                   cebfce3590c91       kindnet-6rhw9
	67d01596103d9       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             5 minutes ago            Running             kube-proxy                0                   ca30e6965e44c       kube-proxy-mc88b
	100bee9bdde9d       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             5 minutes ago            Running             kube-scheduler            0                   7fb4fd4ff5e51       kube-scheduler-addons-497709
	d31b5cb15ecfe       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             5 minutes ago            Running             kube-controller-manager   0                   9d2fcc7ae3b5f       kube-controller-manager-addons-497709
	6512c0435a83a       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             5 minutes ago            Running             kube-apiserver            0                   c7e6079a63ef4       kube-apiserver-addons-497709
	2faf3010ed70c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             5 minutes ago            Running             etcd                      0                   0139b4599c5e2       etcd-addons-497709
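
The table above is CRI-O's view of the pod containers, including the two Exited admission-webhook containers. A rough equivalent straight from the runtime, again assuming node access via the profile:

  minikube -p addons-497709 ssh -- sudo crictl ps -a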
	
	
	==> coredns [c6dc33fa7d2437bf583004bcf96bae70a87075a98f7858fdcc89faf00ec59fd2] <==
	[INFO] 10.244.0.16:41314 - 61817 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00227541s
	[INFO] 10.244.0.16:41314 - 43360 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000579823s
	[INFO] 10.244.0.16:41314 - 696 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00042778s
	[INFO] 10.244.0.16:32789 - 58795 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000186365s
	[INFO] 10.244.0.16:32789 - 59033 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140425s
	[INFO] 10.244.0.16:53993 - 46390 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000236975s
	[INFO] 10.244.0.16:53993 - 46177 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000243622s
	[INFO] 10.244.0.16:34155 - 13464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011082s
	[INFO] 10.244.0.16:34155 - 13637 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098274s
	[INFO] 10.244.0.16:34576 - 38639 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001537644s
	[INFO] 10.244.0.16:34576 - 38820 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002061827s
	[INFO] 10.244.0.16:50601 - 19479 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101614s
	[INFO] 10.244.0.16:50601 - 19331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000219236s
	[INFO] 10.244.0.21:60702 - 62844 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184248s
	[INFO] 10.244.0.21:37453 - 7603 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008115s
	[INFO] 10.244.0.21:54389 - 50768 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000202907s
	[INFO] 10.244.0.21:48798 - 60733 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00020079s
	[INFO] 10.244.0.21:40469 - 42904 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016733s
	[INFO] 10.244.0.21:33900 - 40447 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115981s
	[INFO] 10.244.0.21:50855 - 44966 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005502577s
	[INFO] 10.244.0.21:54004 - 60063 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005590217s
	[INFO] 10.244.0.21:58886 - 650 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000891499s
	[INFO] 10.244.0.21:34929 - 7495 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002822994s
	[INFO] 10.244.0.24:48307 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174402s
	[INFO] 10.244.0.24:40041 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088001s
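
The NXDOMAIN/NOERROR pairs above trace the resolver walking the pod's DNS search path (the cluster suffixes, then the host's us-east-2.compute.internal domain) before the bare service name resolves. The same lookup can be reproduced from a throwaway pod, assuming a busybox image is acceptable:

  kubectl --context addons-497709 run dns-probe --rm -it --restart=Never --image=busybox -- nslookup registry.kube-system.svc.cluster.local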
	
	
	==> describe nodes <==
	Name:               addons-497709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-497709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=addons-497709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_15_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-497709
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:15:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-497709
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:20:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:19:35 +0000   Fri, 19 Sep 2025 22:15:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:19:35 +0000   Fri, 19 Sep 2025 22:15:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:19:35 +0000   Fri, 19 Sep 2025 22:15:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:19:35 +0000   Fri, 19 Sep 2025 22:16:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-497709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 75b982de571046c39cd819207634628d
	  System UUID:                769fa01e-a28f-4200-9fff-42668a1fd686
	  Boot ID:                    7b79e4aa-7121-473c-883d-bf6a4cc4983e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     hello-world-app-5d498dc89-x6gtr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-xlwfp                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-xspwv    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m16s
	  kube-system                 coredns-66bc5c9577-l4hcz                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m22s
	  kube-system                 etcd-addons-497709                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m28s
	  kube-system                 kindnet-6rhw9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m23s
	  kube-system                 kube-apiserver-addons-497709                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-addons-497709       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-mc88b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-addons-497709                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m16s  kube-proxy       
	  Normal   Starting                 5m28s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m28s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m28s  kubelet          Node addons-497709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m28s  kubelet          Node addons-497709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m28s  kubelet          Node addons-497709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m23s  node-controller  Node addons-497709 event: Registered Node addons-497709 in Controller
	  Normal   NodeReady                4m39s  kubelet          Node addons-497709 status is now: NodeReady
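
For reference, the percentages under "Allocated resources" above are the summed requests over the node's allocatable, apparently truncated to whole percents:

  cpu:    950m / 2000m                      = 47.5%  (shown as 47%)
  memory: 310Mi / 8022296Ki ≈ 310 / 7834 Mi ≈ 3.96%  (shown as 3%)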
	
	
	==> dmesg <==
	[Sep19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015869] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460692] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026277] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.762371] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.311227] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [2faf3010ed70cd7a750533806393ed6ea760164cf5655c246731b649add1ad7c] <==
	{"level":"warn","ts":"2025-09-19T22:15:25.678751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.714836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.746910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.761448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.798770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.823477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.837875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.867757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.893761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.927655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.954010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:25.987945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:26.038082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:26.114334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35642","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:15:36.718053Z","caller":"traceutil/trace.go:172","msg":"trace[933426625] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"103.340061ms","start":"2025-09-19T22:15:36.614699Z","end":"2025-09-19T22:15:36.718039Z","steps":["trace[933426625] 'process raft request'  (duration: 103.236987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:15:38.191191Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.307055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:15:38.208572Z","caller":"traceutil/trace.go:172","msg":"trace[755040087] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:362; }","duration":"189.708984ms","start":"2025-09-19T22:15:38.018839Z","end":"2025-09-19T22:15:38.208548Z","steps":["trace[755040087] 'agreement among raft nodes before linearized reading'  (duration: 51.169155ms)","trace[755040087] 'range keys from in-memory index tree'  (duration: 121.110725ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:15:38.203054Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.562399ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082471712120 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-6rhw9\" mod_revision:315 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-6rhw9\" value_size:5328 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-6rhw9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T22:15:38.211011Z","caller":"traceutil/trace.go:172","msg":"trace[1886575323] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"194.753602ms","start":"2025-09-19T22:15:38.016229Z","end":"2025-09-19T22:15:38.210983Z","steps":["trace[1886575323] 'process raft request'  (duration: 53.870719ms)","trace[1886575323] 'compare'  (duration: 132.420268ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:15:42.414338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:42.431058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:16:04.100221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:16:04.115039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:16:04.139934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:16:04.154726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:20:58 up  1:03,  0 users,  load average: 1.67, 1.23, 0.63
	Linux addons-497709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [290d2628506a6e783d377e99df83e5a458ecc4609e0212b78a62a3728920f48e] <==
	I0919 22:18:57.655474       1 main.go:301] handling current node
	I0919 22:19:07.655356       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:07.655387       1 main.go:301] handling current node
	I0919 22:19:17.658358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:17.658389       1 main.go:301] handling current node
	I0919 22:19:27.655683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:27.655716       1 main.go:301] handling current node
	I0919 22:19:37.654901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:37.655059       1 main.go:301] handling current node
	I0919 22:19:47.657044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:47.657156       1 main.go:301] handling current node
	I0919 22:19:57.655131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:57.655166       1 main.go:301] handling current node
	I0919 22:20:07.655566       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:20:07.655601       1 main.go:301] handling current node
	I0919 22:20:17.654908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:20:17.654941       1 main.go:301] handling current node
	I0919 22:20:27.655060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:20:27.655179       1 main.go:301] handling current node
	I0919 22:20:37.654679       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:20:37.654836       1 main.go:301] handling current node
	I0919 22:20:47.658573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:20:47.658607       1 main.go:301] handling current node
	I0919 22:20:57.655144       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:20:57.655181       1 main.go:301] handling current node
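
kindnet logs one node-reconcile pass roughly every ten seconds, which is all the section above shows on a single-node cluster. The cadence can be checked live against the pod named in the container table:

  kubectl --context addons-497709 -n kube-system logs kindnet-6rhw9 --tail=20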
	
	
	==> kube-apiserver [6512c0435a83a9d17b7d22891c526ae04e6fdef29585bacd6aed806732338a86] <==
	I0919 22:18:04.576608       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.115.86"}
	I0919 22:18:34.886073       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 22:18:35.231353       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.199.95"}
	I0919 22:18:37.206736       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0919 22:18:52.382162       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 22:19:00.204191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:19:09.522859       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 22:19:10.493343       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:19:10.493493       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:19:10.581257       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:19:10.588993       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:19:10.623368       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:19:10.623512       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:19:10.647741       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:19:10.647886       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:19:10.659699       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:19:10.659818       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 22:19:11.650156       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0919 22:19:11.660652       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 22:19:11.667242       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0919 22:19:12.486714       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:19:43.393485       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 22:20:16.627722       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:20:23.200109       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:20:56.080521       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.17.220"}
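
Each "allocated clusterIPs" line pairs with a Service created during the run (headlamp, nginx, hello-world-app); the allocations are cross-checkable with an ordinary service listing:

  kubectl --context addons-497709 get svc -A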
	
	
	==> kube-controller-manager [d31b5cb15ecfe598191643161e00de2174f62e2f7fd378cc8af3f04db9cf4ffd] <==
	E0919 22:19:20.997364       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:21.003499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:27.523994       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:27.525295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:32.809129       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:32.810397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:33.464442       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:33.465650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0919 22:19:34.227505       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0919 22:19:34.227541       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:19:34.278245       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0919 22:19:34.278423       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 22:19:44.728562       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:44.729656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:46.253073       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:46.254113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:48.357023       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:48.358464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:20:14.944671       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:20:14.945739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:20:26.987172       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:20:26.988537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0919 22:20:27.055076       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0919 22:20:29.889756       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:20:29.890971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [67d01596103d9a1518cce0faeedf22a4b2769d14e3d054ad4171a978c3a3025e] <==
	I0919 22:15:40.276909       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:15:40.549962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:15:40.773761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:15:40.826846       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:15:40.826967       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:15:40.995397       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:15:40.995527       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:15:41.009901       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:15:41.015295       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:15:41.015661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:15:41.018492       1 config.go:200] "Starting service config controller"
	I0919 22:15:41.018559       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:15:41.018603       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:15:41.018632       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:15:41.018685       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:15:41.018712       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:15:41.019296       1 config.go:309] "Starting node config controller"
	I0919 22:15:41.019348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:15:41.019379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:15:41.121399       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:15:41.137219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:15:41.156477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
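
The configuration warning above carries its own remedy: kube-proxy suggests restricting NodePort listeners via --nodeport-addresses. As a sketch of the flag form it names (assuming flags, rather than a config file, drive kube-proxy in a given setup):

  kube-proxy --nodeport-addresses=primary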
	
	
	==> kube-scheduler [100bee9bdde9d35790ef80e5f93667e81a01b6c79ed876a60c1586f56e0ef542] <==
	I0919 22:15:28.162600       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:15:28.165257       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:15:28.165469       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:15:28.166133       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:15:28.166233       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0919 22:15:28.171577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:15:28.176931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:15:28.177447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0919 22:15:28.177622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 22:15:28.178471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:15:28.178595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:15:28.179551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:15:28.180697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:15:28.180796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:15:28.180907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:15:28.181010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:15:28.181102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:15:28.181118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:15:28.181158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:15:28.181246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:15:28.181269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:15:28.181329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:15:28.181389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:15:28.181478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0919 22:15:29.766513       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
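
The burst of "Failed to watch" errors appears to be the usual startup race, with the scheduler listing resources before its RBAC authorizations have propagated; they stop once the cache sync on the last line lands. A rough way to confirm they did not recur:

  kubectl --context addons-497709 -n kube-system logs kube-scheduler-addons-497709 | grep -c 'Failed to watch'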
	
	
	==> kubelet <==
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.815614    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ceaede3f8161fda13c4752409e2c3f1130e67bd1de78f4f4ee141b8d2f40a3e0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ceaede3f8161fda13c4752409e2c3f1130e67bd1de78f4f4ee141b8d2f40a3e0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.816910    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e2de47b66de077441286827a6f66ab4ffc28494488a05c61c84a3d46785d15bc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e2de47b66de077441286827a6f66ab4ffc28494488a05c61c84a3d46785d15bc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.818148    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/06d66d3c2edbd4a4c930193d2cf5754240c442388230912fd4c3cb62ca61a723/diff" to get inode usage: stat /var/lib/containers/storage/overlay/06d66d3c2edbd4a4c930193d2cf5754240c442388230912fd4c3cb62ca61a723/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.820555    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4d4b6da5281ea4be2c9673854b52d6a9783b39187549d403d23cace2f3a0b805/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4d4b6da5281ea4be2c9673854b52d6a9783b39187549d403d23cace2f3a0b805/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.821741    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/06d66d3c2edbd4a4c930193d2cf5754240c442388230912fd4c3cb62ca61a723/diff" to get inode usage: stat /var/lib/containers/storage/overlay/06d66d3c2edbd4a4c930193d2cf5754240c442388230912fd4c3cb62ca61a723/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.825334    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3edb017f3d024781ba6e27a036f68093e773bf5ec818c601897c2aa2c31294fc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3edb017f3d024781ba6e27a036f68093e773bf5ec818c601897c2aa2c31294fc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.826770    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/006fc445d8bc07b5ed33f789e180b089905331e21a4e047417ecc2105319701e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/006fc445d8bc07b5ed33f789e180b089905331e21a4e047417ecc2105319701e/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.860531    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d001889884a67277bc38dcc549372d01bee348d0ea7c343e57a8c34c090e0fe1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d001889884a67277bc38dcc549372d01bee348d0ea7c343e57a8c34c090e0fe1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.980260    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320429979966902 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 19 22:20:29 addons-497709 kubelet[1531]: E0919 22:20:29.980296    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320429979966902 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 19 22:20:30 addons-497709 kubelet[1531]: I0919 22:20:30.075623    1531 scope.go:117] "RemoveContainer" containerID="10b66d677661170d87bff63fda62acc3389abdcf9bab681419bc5b48cf393378"
	Sep 19 22:20:34 addons-497709 kubelet[1531]: I0919 22:20:34.453525    1531 scope.go:117] "RemoveContainer" containerID="6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a"
	Sep 19 22:20:34 addons-497709 kubelet[1531]: I0919 22:20:34.472714    1531 scope.go:117] "RemoveContainer" containerID="6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a"
	Sep 19 22:20:34 addons-497709 kubelet[1531]: E0919 22:20:34.473108    1531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a\": container with ID starting with 6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a not found: ID does not exist" containerID="6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a"
	Sep 19 22:20:34 addons-497709 kubelet[1531]: I0919 22:20:34.473147    1531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a"} err="failed to get container status \"6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a\": rpc error: code = NotFound desc = could not find container \"6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a\": container with ID starting with 6c615d95e9727e1d94698674d4d42e9445e63a5a48279cb476729b991e665b3a not found: ID does not exist"
	Sep 19 22:20:34 addons-497709 kubelet[1531]: I0919 22:20:34.514807    1531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52bdx\" (UniqueName: \"kubernetes.io/projected/c9730761-04ca-4b32-8553-df1bbb6cb4e5-kube-api-access-52bdx\") pod \"c9730761-04ca-4b32-8553-df1bbb6cb4e5\" (UID: \"c9730761-04ca-4b32-8553-df1bbb6cb4e5\") "
	Sep 19 22:20:34 addons-497709 kubelet[1531]: I0919 22:20:34.518562    1531 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9730761-04ca-4b32-8553-df1bbb6cb4e5-kube-api-access-52bdx" (OuterVolumeSpecName: "kube-api-access-52bdx") pod "c9730761-04ca-4b32-8553-df1bbb6cb4e5" (UID: "c9730761-04ca-4b32-8553-df1bbb6cb4e5"). InnerVolumeSpecName "kube-api-access-52bdx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 19 22:20:34 addons-497709 kubelet[1531]: I0919 22:20:34.615313    1531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-52bdx\" (UniqueName: \"kubernetes.io/projected/c9730761-04ca-4b32-8553-df1bbb6cb4e5-kube-api-access-52bdx\") on node \"addons-497709\" DevicePath \"\""
	Sep 19 22:20:35 addons-497709 kubelet[1531]: I0919 22:20:35.683346    1531 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9730761-04ca-4b32-8553-df1bbb6cb4e5" path="/var/lib/kubelet/pods/c9730761-04ca-4b32-8553-df1bbb6cb4e5/volumes"
	Sep 19 22:20:39 addons-497709 kubelet[1531]: E0919 22:20:39.983117    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320439982846548 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 19 22:20:39 addons-497709 kubelet[1531]: E0919 22:20:39.983160    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320439982846548 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 19 22:20:49 addons-497709 kubelet[1531]: E0919 22:20:49.985751    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320449985489655 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 19 22:20:49 addons-497709 kubelet[1531]: E0919 22:20:49.985785    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320449985489655 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 19 22:20:55 addons-497709 kubelet[1531]: I0919 22:20:55.974977    1531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxc4\" (UniqueName: \"kubernetes.io/projected/be7778d9-c39a-4877-9e24-fb899ec4d4dc-kube-api-access-9qxc4\") pod \"hello-world-app-5d498dc89-x6gtr\" (UID: \"be7778d9-c39a-4877-9e24-fb899ec4d4dc\") " pod="default/hello-world-app-5d498dc89-x6gtr"
	Sep 19 22:20:56 addons-497709 kubelet[1531]: W0919 22:20:56.251318    1531 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c9170066012a16d2cf897db78fc7a050699f6e3637af8a845a40af01faa96e1e/crio-8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3 WatchSource:0}: Error finding container 8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3: Status 404 returned error can't find the container with id 8f34fe98ee6ef91fd8624b3a856fc1bd69c973a73aeec9c642474d525a2211c3
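
The repeating eviction-manager errors say the kubelet could not fold CRI-O's image-filesystem stats into its eviction signals; given the node conditions earlier report no memory or disk pressure, this reads as a stats-plumbing gap rather than real pressure. What the kubelet itself reports can be inspected through the API server's node proxy:

  kubectl --context addons-497709 get --raw /api/v1/nodes/addons-497709/proxy/stats/summary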
	
	
	==> storage-provisioner [afc005584780a87e5c60a3a110a039ba2f5acbc4dc4c9426ce03c92462f763b2] <==
	W0919 22:20:33.189862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:35.193530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:35.198451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:37.202038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:37.206605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:39.209051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:39.213396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:41.216941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:41.223722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:43.226755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:43.231578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:45.234613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:45.242515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:47.245186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:47.249931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:49.252671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:49.257577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:51.260626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:51.264844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:53.267912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:53.274844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:55.277632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:55.282046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:57.286601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:20:57.296369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
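
The provisioner hits the v1 Endpoints API every couple of seconds (most likely its leader-election lease), and each call now draws the deprecation warning; the suggested replacement resource is already queryable on the same cluster:

  kubectl --context addons-497709 get endpointslices -A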
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-497709 -n addons-497709
helpers_test.go:269: (dbg) Run:  kubectl --context addons-497709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-ncdqb ingress-nginx-admission-patch-cdqhf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-497709 describe pod ingress-nginx-admission-create-ncdqb ingress-nginx-admission-patch-cdqhf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-497709 describe pod ingress-nginx-admission-create-ncdqb ingress-nginx-admission-patch-cdqhf: exit status 1 (91.014963ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ncdqb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cdqhf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-497709 describe pod ingress-nginx-admission-create-ncdqb ingress-nginx-admission-patch-cdqhf: exit status 1
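The NotFound results are expected at this point: the admission-create and admission-patch pods belong to one-shot ingress-nginx Jobs whose pods are cleaned up after completion, so the post-mortem describe races that cleanup. A hedged way to confirm the Jobs themselves finished (assuming the addon's usual ingress-nginx namespace):

	kubectl --context addons-497709 -n ingress-nginx get jobs
	kubectl --context addons-497709 -n ingress-nginx get pods --show-labels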
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 addons disable ingress-dns --alsologtostderr -v=1: (1.330421676s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 addons disable ingress --alsologtostderr -v=1: (7.812625404s)
--- FAIL: TestAddons/parallel/Ingress (153.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-995015 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-995015 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xpphs" [659838f9-2d53-4fd1-8781-cdd87f2d3202] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0919 22:27:40.667961    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:08.369582    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:32:40.667806    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
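These cert_rotation errors reference the client.crt of the addons-497709 profile, which was deleted earlier in the run; the shared kubeconfig still holds a stale entry that client-go keeps retrying. A minimal cleanup sketch, assuming the context, cluster, and user entries all share the profile name:

	kubectl config delete-context addons-497709
	kubectl config delete-cluster addons-497709
	kubectl config delete-user addons-497709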
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-995015 -n functional-995015
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-19 22:35:36.199257504 +0000 UTC m=+1275.110495831
functional_test.go:1645: (dbg) Run:  kubectl --context functional-995015 describe po hello-node-connect-7d85dfc575-xpphs -n default
functional_test.go:1645: (dbg) kubectl --context functional-995015 describe po hello-node-connect-7d85dfc575-xpphs -n default:
Name:             hello-node-connect-7d85dfc575-xpphs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-995015/192.168.49.2
Start Time:       Fri, 19 Sep 2025 22:25:35 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5d25 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-r5d25:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xpphs to functional-995015
  Normal   Pulling    7m21s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m21s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m21s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
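The events pinpoint the root cause: CRI-O refuses to expand the unqualified reference "kicbase/echo-server" because the node's /etc/containers/registries.conf defines no unqualified-search registries. Two possible fixes, sketched under the assumption that the image is published on Docker Hub and that CRI-O runs as the crio systemd unit inside the node:

	# Option 1: deploy with a fully qualified image reference
	kubectl --context functional-995015 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:latest
	# Option 2: allow short-name lookups against docker.io inside the node, then reload CRI-O
	out/minikube-linux-arm64 -p functional-995015 ssh -- "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"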
functional_test.go:1645: (dbg) Run:  kubectl --context functional-995015 logs hello-node-connect-7d85dfc575-xpphs -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-995015 logs hello-node-connect-7d85dfc575-xpphs -n default: exit status 1 (83.600767ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xpphs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-995015 logs hello-node-connect-7d85dfc575-xpphs -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-995015 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-xpphs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-995015/192.168.49.2
Start Time:       Fri, 19 Sep 2025 22:25:35 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5d25 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-r5d25:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xpphs to functional-995015
  Normal   Pulling    7m21s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m21s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m21s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-995015 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-995015 logs -l app=hello-node-connect: exit status 1 (90.251731ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xpphs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-995015 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-995015 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.221.163
IPs:                      10.97.221.163
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32227/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
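The empty Endpoints field is consistent with the pod diagnosis above: the selector matches the hello-node-connect pod, but the pod never reached Ready, so no address is published behind the NodePort. A quick hedged spot check via the service-name label that EndpointSlices carry:

	kubectl --context functional-995015 get endpointslices \
	  -l kubernetes.io/service-name=hello-node-connect -o wide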
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-995015
helpers_test.go:243: (dbg) docker inspect functional-995015:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0",
	        "Created": "2025-09-19T22:22:14.246587539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22762,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:22:14.320203896Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/hosts",
	        "LogPath": "/var/lib/docker/containers/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0-json.log",
	        "Name": "/functional-995015",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-995015:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-995015",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0",
	                "LowerDir": "/var/lib/docker/overlay2/7e4a2746dd8cdcddc71dad110a1828c0024588eeb05bad65b38d31656dab175d-init/diff:/var/lib/docker/overlay2/7a5d5014689cfdaab77901928a3123965a103b6cffc2baf102de2c2f246b4108/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e4a2746dd8cdcddc71dad110a1828c0024588eeb05bad65b38d31656dab175d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e4a2746dd8cdcddc71dad110a1828c0024588eeb05bad65b38d31656dab175d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e4a2746dd8cdcddc71dad110a1828c0024588eeb05bad65b38d31656dab175d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-995015",
	                "Source": "/var/lib/docker/volumes/functional-995015/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-995015",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-995015",
	                "name.minikube.sigs.k8s.io": "functional-995015",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "489f29e786e2b96b26b77b6829d4a73d972c67c2514958900cc38b4b94e76a9b",
	            "SandboxKey": "/var/run/docker/netns/489f29e786e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-995015": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:46:73:cf:9a:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ac2cea7876f40df00df0fcae277017d1ec38cee2c7c0a5c377dffa017045325",
	                    "EndpointID": "89bb00e636ab7f82f23f8bfe71fdf87a4b3f34e43b39c81b3987fa954942957e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-995015",
	                        "6d013edf8c00"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-995015 -n functional-995015
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 logs -n 25: (1.898153383s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-995015 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh            │ functional-995015 ssh -- ls -la /mount-9p                                                                          │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh            │ functional-995015 ssh sudo umount -f /mount-9p                                                                     │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ ssh            │ functional-995015 ssh findmnt -T /mount1                                                                           │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ mount          │ -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount3 --alsologtostderr -v=1 │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ mount          │ -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount1 --alsologtostderr -v=1 │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ mount          │ -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount2 --alsologtostderr -v=1 │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ ssh            │ functional-995015 ssh findmnt -T /mount1                                                                           │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh            │ functional-995015 ssh findmnt -T /mount2                                                                           │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh            │ functional-995015 ssh findmnt -T /mount3                                                                           │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ mount          │ -p functional-995015 --kill=true                                                                                   │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ start          │ -p functional-995015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ start          │ -p functional-995015 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ start          │ -p functional-995015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-995015 --alsologtostderr -v=1                                                     │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-995015 update-context --alsologtostderr -v=2                                                            │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-995015 update-context --alsologtostderr -v=2                                                            │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-995015 update-context --alsologtostderr -v=2                                                            │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-995015 image ls --format short --alsologtostderr                                                        │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-995015 image ls --format yaml --alsologtostderr                                                         │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh            │ functional-995015 ssh pgrep buildkitd                                                                              │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ image          │ functional-995015 image build -t localhost/my-image:functional-995015 testdata/build --alsologtostderr             │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-995015 image ls                                                                                         │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-995015 image ls --format json --alsologtostderr                                                         │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-995015 image ls --format table --alsologtostderr                                                        │ functional-995015 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:35:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:35:18.291814   35410 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:35:18.292025   35410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:18.292047   35410 out.go:374] Setting ErrFile to fd 2...
	I0919 22:35:18.292067   35410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:18.292458   35410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:35:18.292879   35410 out.go:368] Setting JSON to false
	I0919 22:35:18.293741   35410 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4669,"bootTime":1758316649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 22:35:18.293838   35410 start.go:140] virtualization:  
	I0919 22:35:18.297107   35410 out.go:179] * [functional-995015] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0919 22:35:18.300897   35410 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:35:18.300982   35410 notify.go:220] Checking for updates...
	I0919 22:35:18.306970   35410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:35:18.309808   35410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:35:18.312727   35410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 22:35:18.315568   35410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 22:35:18.318430   35410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:35:18.322043   35410 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:35:18.322669   35410 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:35:18.351314   35410 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 22:35:18.351434   35410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:18.412053   35410 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-19 22:35:18.403101505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:35:18.412161   35410 docker.go:318] overlay module found
	I0919 22:35:18.415418   35410 out.go:179] * Using the docker driver based on existing profile
	I0919 22:35:18.418357   35410 start.go:304] selected driver: docker
	I0919 22:35:18.418389   35410 start.go:918] validating driver "docker" against &{Name:functional-995015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-995015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:35:18.418574   35410 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:35:18.422089   35410 out.go:203] 
	W0919 22:35:18.424959   35410 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than usable minimum 1800MB
	I0919 22:35:18.427820   35410 out.go:203] 
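	The RSRC_INSUFFICIENT_REQ_MEMORY exit matches the --dry-run --memory 250MB invocation recorded in the Audit table: minikube validates the request against its 1800MB floor before touching the cluster, so this rejection is the expected outcome of that negative test. A hedged sketch of a dry-run that passes validation (memory value chosen arbitrarily above the floor):
	
	  out/minikube-linux-arm64 start -p functional-995015 --dry-run --memory 2048MB \
	    --alsologtostderr --driver=docker --container-runtime=crio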
	
	
	==> CRI-O <==
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.045990920Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e580c6ce-b5d9-4e36-b4c9-8535ff96e887 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.047130623Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf],Size_:247562353,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e580c6ce-b5d9-4e36-b4c9-8535ff96e887 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.048332153Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=477f2304-c849-4442-9e1d-d8426d43f8a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.049322438Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf],Size_:247562353,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=477f2304-c849-4442-9e1d-d8426d43f8a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.049486847Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=feff2309-7e82-47a4-b484-a8c20a017de9 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.050746732Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.056162526Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xx8vl/kubernetes-dashboard" id=ff6f0fae-b7db-4b37-8f6d-709f690b12b4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.056268013Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.078937154Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7fa4c0be9d7067d2937d5492882b609c3f5d49203adc422260d0c6a8e36298aa/merged/etc/group: no such file or directory"
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.122810665Z" level=info msg="Created container 9536a185043c3f33d871cd1cd74920f389dde3af30ca3287fbf40a2fe5f451bf: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xx8vl/kubernetes-dashboard" id=ff6f0fae-b7db-4b37-8f6d-709f690b12b4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.123773397Z" level=info msg="Starting container: 9536a185043c3f33d871cd1cd74920f389dde3af30ca3287fbf40a2fe5f451bf" id=f36b17ec-1a11-4197-8f89-7123be415399 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.132868054Z" level=info msg="Started container" PID=7635 containerID=9536a185043c3f33d871cd1cd74920f389dde3af30ca3287fbf40a2fe5f451bf description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xx8vl/kubernetes-dashboard id=f36b17ec-1a11-4197-8f89-7123be415399 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d051c95fffa9d47b4abb9f620c0a601e2f9c677f2e02266bd12e973ed2f03481
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.293183133Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 19 22:35:24 functional-995015 crio[4313]: time="2025-09-19 22:35:24.647445042Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\", expecting one of \"linux+arm64\""
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.549279852Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=feff2309-7e82-47a4-b484-a8c20a017de9 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.549872141Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=852d4df6-8d71-439a-a948-131b4074c450 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.550871206Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a],Size_:42263767,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=852d4df6-8d71-439a-a948-131b4074c450 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.551862886Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d129baaf-59eb-4d8d-85d1-dfb2ab73b606 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.552741333Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a],Size_:42263767,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d129baaf-59eb-4d8d-85d1-dfb2ab73b606 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.557742408Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7lgrv/dashboard-metrics-scraper" id=85577edd-3c88-46c8-a2cb-bc4d873840f8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.557849347Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.577111761Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bee89856a6befd82d30a6a22934dbe69d6721d99d6576f31284d967e669d0532/merged/etc/group: no such file or directory"
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.631145186Z" level=info msg="Created container 7208f1a46566cabf204b03dd7703f661e425059d4af4400f8b930e047d036994: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7lgrv/dashboard-metrics-scraper" id=85577edd-3c88-46c8-a2cb-bc4d873840f8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.631938290Z" level=info msg="Starting container: 7208f1a46566cabf204b03dd7703f661e425059d4af4400f8b930e047d036994" id=7f46eb43-2528-4bc0-8035-5989c9df32de name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:35:25 functional-995015 crio[4313]: time="2025-09-19 22:35:25.644800156Z" level=info msg="Started container" PID=7686 containerID=7208f1a46566cabf204b03dd7703f661e425059d4af4400f8b930e047d036994 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7lgrv/dashboard-metrics-scraper id=7f46eb43-2528-4bc0-8035-5989c9df32de name=/runtime.v1.RuntimeService/StartContainer sandboxID=19291f077946c0b55159bb97de02271ededeb4d785abde545d08be82f766e93f
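	Worth noting in the CRI-O log above: the "Image operating system mismatch" entry shows a linux/amd64 layer of kubernetesui/metrics-scraper being accepted on this arm64 node, most likely because the dashboard addon pins the image by a digest that resolves to a single-architecture manifest. A hedged way to list which platforms a reference actually provides (manifest inspection assumed available in the host docker CLI):
	
	  docker manifest inspect kubernetesui/metrics-scraper:v1.0.8 | grep -i architecture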
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	7208f1a46566c       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   12 seconds ago      Running             dashboard-metrics-scraper   0                   19291f077946c       dashboard-metrics-scraper-77bf4d6c4c-7lgrv
	9536a185043c3       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         13 seconds ago      Running             kubernetes-dashboard        0                   d051c95fffa9d       kubernetes-dashboard-855c9754f9-xx8vl
	4cd07ae9c1537       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              26 seconds ago      Exited              mount-munger                0                   43d4c72bd0b13       busybox-mount
	7854e723eb772       docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e                  10 minutes ago      Running             myfrontend                  0                   74e2fe4c22f64       sp-pod
	2433bec93f1e5       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   7d0566d259600       nginx-svc
	27bad4d792efa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 2                   864f9da348246       kindnet-x9tlz
	de3bf8dbcfcbc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   b3365be8dddb7       coredns-66bc5c9577-phcnt
	130a41933411a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         3                   cd934033b8b74       storage-provisioner
	58fc27b8e8e8d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   8fe1a6229a072       coredns-66bc5c9577-jknbh
	10442445ae722       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                 11 minutes ago      Running             kube-proxy                  2                   a5b698cce2e52       kube-proxy-lctb9
	dd3d425e01166       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                 11 minutes ago      Running             kube-apiserver              0                   ec02f76cf0562       kube-apiserver-functional-995015
	5c16ad70224bf       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                 11 minutes ago      Running             kube-controller-manager     2                   a9bb81bccbbdb       kube-controller-manager-functional-995015
	c3370e78fbf53       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                 11 minutes ago      Running             kube-scheduler              2                   a51c6e770f163       kube-scheduler-functional-995015
	6ecf34df55079       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        2                   c95da640e7d6c       etcd-functional-995015
	9569e8b96f710       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         2                   cd934033b8b74       storage-provisioner
	8652d712ac061       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   8fe1a6229a072       coredns-66bc5c9577-jknbh
	e72b6fd6a1a38       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        1                   c95da640e7d6c       etcd-functional-995015
	f80555037bbdf       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                 11 minutes ago      Exited              kube-proxy                  1                   a5b698cce2e52       kube-proxy-lctb9
	4545b18cb5477       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 1                   864f9da348246       kindnet-x9tlz
	fca5042f5a88d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   b3365be8dddb7       coredns-66bc5c9577-phcnt
	e2ab57e91c079       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                 11 minutes ago      Exited              kube-scheduler              1                   a51c6e770f163       kube-scheduler-functional-995015
	e9c0788e3f1bc       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                 11 minutes ago      Exited              kube-controller-manager     1                   a9bb81bccbbdb       kube-controller-manager-functional-995015
	
	
	==> coredns [58fc27b8e8e8dd8ae7cf0cd0ad6fdfde7e5a89db89a6413fad2cfdbca52b644c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50926 - 35459 "HINFO IN 110835052069938100.7956655309079841465. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027568841s
	
	
	==> coredns [8652d712ac0612d1875b9520ed9cd813f663bf113f4cbde6ef25cc4f7730564e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59624 - 7108 "HINFO IN 2443212735569650548.5042080676715875833. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039527794s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
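	The connection-refused and SIGTERM lines in this CoreDNS instance line up with the functional suite restarting the control plane: 10.96.0.1:443 is the in-cluster apiserver service, and the pod was terminated once the restarted apiserver came back. A hedged check that the current CoreDNS pods settled afterwards:
	
	  kubectl --context functional-995015 -n kube-system get pods -l k8s-app=kube-dns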
	
	
	==> coredns [de3bf8dbcfcbc818b14a72a82f444acad3ea098b3a9d089a3f92be08312bb474] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34957 - 41322 "HINFO IN 2450244956043180008.7767901465282363877. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.054440959s
	
	
	==> coredns [fca5042f5a88d2c483db8f5faa9742118fcf819fd6a82068c1b6d0e3806585c4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44816 - 40787 "HINFO IN 8186373618231448946.4908617022323752416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005338587s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-995015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-995015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=functional-995015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_22_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-995015
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:32 +0000   Fri, 19 Sep 2025 22:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:32 +0000   Fri, 19 Sep 2025 22:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:32 +0000   Fri, 19 Sep 2025 22:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:32 +0000   Fri, 19 Sep 2025 22:23:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-995015
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1aedaf6f5dae41dd894c3e0210104da5
	  System UUID:                8db80ec5-4e65-4bda-9094-b95e2aacf8b8
	  Boot ID:                    7b79e4aa-7121-473c-883d-bf6a4cc4983e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hk2tb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-xpphs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-jknbh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-phcnt                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-995015                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-x9tlz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-995015              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-995015     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lctb9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-995015              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-7lgrv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xx8vl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-995015 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-995015 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-995015 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-995015 event: Registered Node functional-995015 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-995015 status is now: NodeReady
	  Warning  ContainerGCFailed        12m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-995015 event: Registered Node functional-995015 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-995015 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-995015 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-995015 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-995015 event: Registered Node functional-995015 in Controller
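
	Note: two details stand out above. The ContainerGCFailed warning shows the kubelet briefly losing /var/run/crio/crio.sock, i.e. CRI-O was restarting, and CPU requests already sit at 950m of the node's 2 CPUs (47%), so little request headroom remains for further scheduling. A quick re-check of both, assuming the minikube profile functional-995015:

	  # Confirm CRI-O is active inside the node container
	  minikube -p functional-995015 ssh -- sudo systemctl is-active crio
	  # Re-list the node's allocated resources
	  kubectl --context functional-995015 describe node functional-995015 | grep -A 10 "Allocated resources"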
	
	
	==> dmesg <==
	[Sep19 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015869] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460692] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026277] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.762371] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.311227] kauditd_printk_skb: 36 callbacks suppressed
	[Sep19 22:35] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
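
	Note: the kmem.limit_in_bytes deprecation here and the kubelet's CgroupV1 warning above both indicate the host is still on cgroup v1. A one-line check, assuming shell access through minikube ssh:

	  # Prints cgroup2fs on a cgroup v2 host, tmpfs on a cgroup v1 hierarchy
	  minikube -p functional-995015 ssh -- stat -fc %T /sys/fs/cgroup/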
	
	
	==> etcd [6ecf34df55079d326c792c81752150f65e2f5e75a255c9d7cc5cb9d0ce051912] <==
	{"level":"warn","ts":"2025-09-19T22:24:26.897323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:26.930034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:26.958181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.009038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.050590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.088821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.121564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.149566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.177070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.199351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.232357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.262382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.317740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.319173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.350702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.371925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.401861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.424429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.451172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.483639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.490440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:27.592218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:34:25.841237Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2025-09-19T22:34:25.864817Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1106,"took":"23.20921ms","hash":3183268269,"current-db-size-bytes":3207168,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1388544,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-19T22:34:25.864872Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3183268269,"revision":1106,"compact-revision":-1}
	
	
	==> etcd [e72b6fd6a1a382122aaea4f02c2c4d2b06561c4dacd0e5b8d209ac998701bc10] <==
	{"level":"warn","ts":"2025-09-19T22:23:44.136914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:23:44.159242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:23:44.184932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:23:44.213883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:23:44.239029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:23:44.269190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:23:44.369343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:24:07.845034Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-19T22:24:07.845110Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-995015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-19T22:24:07.845196Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-19T22:24:07.994016Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-19T22:24:07.994065Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:24:07.994110Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-19T22:24:07.994232Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:24:07.994289Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:24:07.994312Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:24:07.994322Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-19T22:24:07.994219Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:24:07.994335Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:24:07.994341Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:24:07.994300Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-19T22:24:07.998224Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-19T22:24:07.998712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:24:07.998758Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-19T22:24:07.998766Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-995015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:35:38 up  1:18,  0 users,  load average: 0.88, 0.50, 0.53
	Linux functional-995015 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [27bad4d792efaf631f4806b2a996627e18c94fc6e84e2b733ecbcc10a57045c8] <==
	I0919 22:33:29.514940       1 main.go:301] handling current node
	I0919 22:33:39.514992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:39.515027       1 main.go:301] handling current node
	I0919 22:33:49.515538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:49.515571       1 main.go:301] handling current node
	I0919 22:33:59.514831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:59.514869       1 main.go:301] handling current node
	I0919 22:34:09.515596       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:09.515629       1 main.go:301] handling current node
	I0919 22:34:19.514950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:19.514983       1 main.go:301] handling current node
	I0919 22:34:29.514884       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:29.514999       1 main.go:301] handling current node
	I0919 22:34:39.514860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:39.514897       1 main.go:301] handling current node
	I0919 22:34:49.515716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:49.515752       1 main.go:301] handling current node
	I0919 22:34:59.514825       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:59.515179       1 main.go:301] handling current node
	I0919 22:35:09.515548       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:09.515610       1 main.go:301] handling current node
	I0919 22:35:19.518335       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:19.518459       1 main.go:301] handling current node
	I0919 22:35:29.514873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:29.514903       1 main.go:301] handling current node
	
	
	==> kindnet [4545b18cb5477dce239950fbbe40a730657b2b568eb9c9f2f6c82fe57f1db4e1] <==
	I0919 22:23:39.845088       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 22:23:39.874576       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0919 22:23:39.874892       1 main.go:148] setting mtu 1500 for CNI 
	I0919 22:23:39.890872       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 22:23:39.901615       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T22:23:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 22:23:40.117887       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 22:23:40.117965       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 22:23:40.117999       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 22:23:40.119222       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 22:23:40.119622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0919 22:23:40.119779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 22:23:40.119912       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 22:23:40.120063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0919 22:23:45.621370       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 22:23:45.621491       1 metrics.go:72] Registering metrics
	I0919 22:23:45.621606       1 controller.go:711] "Syncing nftables rules"
	I0919 22:23:50.118216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:23:50.118295       1 main.go:301] handling current node
	I0919 22:24:00.119008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:24:00.119111       1 main.go:301] handling current node
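
	Note: the reflector errors at 22:23:40 are the same transient apiserver outage seen in the CoreDNS logs; the informer caches sync at 22:23:45 once the apiserver answers, and the nri.sock message only means no NRI service is configured on this node. To confirm the CNI pods settled, assuming minikube's kindnet DaemonSet and its app=kindnet label:

	  kubectl --context functional-995015 -n kube-system get ds kindnet
	  kubectl --context functional-995015 -n kube-system logs -l app=kindnet --tail=5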
	
	
	==> kube-apiserver [dd3d425e01166f8bae58cdb216bc2fde2cf106be13141c3e49da886d3e4ef43e] <==
	I0919 22:25:02.521119       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.199.232"}
	E0919 22:25:28.405477       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40376: use of closed network connection
	I0919 22:25:33.638696       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:25:35.506828       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39738: use of closed network connection
	I0919 22:25:35.859713       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.221.163"}
	I0919 22:25:43.172655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:36.175381       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:47.687221       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:56.939338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:03.118955       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:03.654570       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:19.401702       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:24.920354       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:27.478792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:32.154150       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:47.648608       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:44.955133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:07.882725       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:57.343896       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:21.895119       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:28.391013       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:35:11.483474       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:19.462065       1 controller.go:667] quota admission added evaluator for: namespaces
	I0919 22:35:19.777511       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.215.76"}
	I0919 22:35:19.803075       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.55.77"}
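
	Note: the recurring stats.go "Error getting keys" err="empty key" entries read as log noise rather than failures; ClusterIP allocation and the dashboard namespace and Service creation at 22:35:19 all succeed around them. A sanity check on the objects this log says were created, using the service names from this run:

	  kubectl --context functional-995015 -n kubernetes-dashboard get svc
	  kubectl --context functional-995015 get svc nginx-svc hello-node-connect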
	
	
	==> kube-controller-manager [5c16ad70224bf41b932e4f7bf84483008b7dd44892dec13c0be66c464481aaee] <==
	I0919 22:24:31.951312       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:24:31.951941       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:24:31.954034       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:31.963234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:31.963333       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:31.963364       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:24:31.963472       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:31.963895       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:24:31.964041       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:24:31.964079       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:24:31.964183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 22:24:31.964518       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 22:24:31.966019       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:24:31.974501       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:24:31.974666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:31.974736       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:31.980792       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:24:31.983408       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:31.989885       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E0919 22:35:19.624437       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:35:19.628180       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:35:19.643378       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:35:19.643815       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:35:19.664757       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:35:19.665116       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
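
	Note: the repeated "Unhandled Error" entries at 22:35:19 are the ReplicaSet controller racing the creation of the kubernetes-dashboard ServiceAccount; the controller retries, and the dashboard pods do appear in the node description above (age 19s). To verify the account the errors reference now exists:

	  kubectl --context functional-995015 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard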
	
	
	==> kube-controller-manager [e9c0788e3f1bc644536f53cb315919ae8c4dd9095375eee6ec54a4281be0654b] <==
	I0919 22:23:48.769860       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:23:48.772491       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:23:48.772986       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 22:23:48.773078       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 22:23:48.773144       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 22:23:48.773182       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 22:23:48.773226       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 22:23:48.775080       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:23:48.777331       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:23:48.777425       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:23:48.779875       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 22:23:48.788184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:23:48.798638       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:23:48.798678       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:23:48.798649       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:23:48.798735       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:23:48.798781       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:23:48.798892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:23:48.799010       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-995015"
	I0919 22:23:48.799088       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:23:48.799309       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:23:48.799124       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:23:48.799113       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:23:48.801792       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:23:51.253972       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [10442445ae72208974f6d00a17874718ccdb6caf2b96c699a21b26e304182962] <==
	I0919 22:24:29.252628       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:29.673953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:29.789180       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:29.789304       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:29.789428       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:29.921558       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:29.921687       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:29.927036       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:29.927401       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:29.927613       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:29.930920       1 config.go:200] "Starting service config controller"
	I0919 22:24:29.930987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:29.931052       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:29.931080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:29.931118       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:29.931147       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:29.931911       1 config.go:309] "Starting node config controller"
	I0919 22:24:29.931981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:29.932014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:30.047417       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:24:30.047475       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:30.047528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
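
	Note: the only complaint from this kube-proxy (and the older instance below) is the unset nodePortAddresses warning, for which the log itself suggests the primary setting; everything else syncs normally. To inspect the live value, assuming the kubeadm-style kube-proxy ConfigMap that minikube provisions:

	  # The config.conf key should show nodePortAddresses (empty here)
	  kubectl --context functional-995015 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses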
	
	
	==> kube-proxy [f80555037bbdfee2708c61a40cd144bcbf8e1c1a3e283c4b8f3d968a3e6c7228] <==
	I0919 22:23:43.967138       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:23:44.370172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:23:45.682951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:23:45.682987       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:23:45.683100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:23:45.797316       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:23:45.797441       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:23:45.819823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:23:45.820121       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:23:45.820146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:23:45.826786       1 config.go:200] "Starting service config controller"
	I0919 22:23:45.844503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:23:45.845024       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:23:45.845948       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:23:45.845307       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:23:45.846030       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:23:45.846116       1 config.go:309] "Starting node config controller"
	I0919 22:23:45.846123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:23:45.846129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:23:45.948407       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:23:45.948417       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:23:45.948491       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c3370e78fbf53042eff28a150d38a695940215a3a6e686179a1bb5e76eace1dc] <==
	I0919 22:24:28.941055       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:30.459031       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:30.459143       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:30.464797       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:30.464909       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 22:24:30.464931       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 22:24:30.464955       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:30.465911       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:30.465926       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:30.465941       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:30.465947       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:30.569559       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:30.569780       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 22:24:30.569900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e2ab57e91c079fd8febca76bc3a2c977317ea9d0060a0fe7be0f75cc56bbaa58] <==
	E0919 22:23:45.454655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:23:45.454846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 22:23:45.454903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:23:45.454952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:23:45.454993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:23:45.455032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:23:45.455505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:23:45.455655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:23:45.455782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:23:45.455898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:23:45.456318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:23:45.518685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:23:45.519005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:23:45.519104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:23:45.519234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:23:45.519361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:23:45.519534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:23:45.519682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:23:46.694565       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:07.851344       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:24:07.851536       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:24:07.851703       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:24:07.851791       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:07.852552       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:24:07.852788       1 run.go:72] "command failed" err="finished without leader elect"
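
	Note: the burst of "forbidden" list/watch errors at 22:23:45 is the usual startup race where the scheduler's informers begin before its RBAC grants are visible; the cache sync at 22:23:46 shows it recovered, and the final "finished without leader elect" error is just how the SIGTERM shutdown at 22:24:07 is reported. To inspect the grant those errors refer to, assuming the default kubeadm binding name:

	  kubectl --context functional-995015 get clusterrolebinding system:kube-scheduler -o wide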
	
	
	==> kubelet <==
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.890510    4603 manager.go:1116] Failed to create existing container: /crio-a9bb81bccbbdb65bfddc7e8ae0372d98cc321aff66a0e4c1b90d113f40493d29: Error finding container a9bb81bccbbdb65bfddc7e8ae0372d98cc321aff66a0e4c1b90d113f40493d29: Status 404 returned error can't find the container with id a9bb81bccbbdb65bfddc7e8ae0372d98cc321aff66a0e4c1b90d113f40493d29
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.891115    4603 manager.go:1116] Failed to create existing container: /crio-e73f61e3c2af3d2535a668a391379c7247b350c719ce006e2f4ed6432c3e2a57: Error finding container e73f61e3c2af3d2535a668a391379c7247b350c719ce006e2f4ed6432c3e2a57: Status 404 returned error can't find the container with id e73f61e3c2af3d2535a668a391379c7247b350c719ce006e2f4ed6432c3e2a57
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.891329    4603 manager.go:1116] Failed to create existing container: /crio-a5b698cce2e526eaeed4f5703f59c81599e2845623c5c84e447a31bc0d55a172: Error finding container a5b698cce2e526eaeed4f5703f59c81599e2845623c5c84e447a31bc0d55a172: Status 404 returned error can't find the container with id a5b698cce2e526eaeed4f5703f59c81599e2845623c5c84e447a31bc0d55a172
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.891499    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-e73f61e3c2af3d2535a668a391379c7247b350c719ce006e2f4ed6432c3e2a57: Error finding container e73f61e3c2af3d2535a668a391379c7247b350c719ce006e2f4ed6432c3e2a57: Status 404 returned error can't find the container with id e73f61e3c2af3d2535a668a391379c7247b350c719ce006e2f4ed6432c3e2a57
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.891728    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-c95da640e7d6c125ac960f023499c34b373cc3ebdf75a22e9f7d778e10afa68e: Error finding container c95da640e7d6c125ac960f023499c34b373cc3ebdf75a22e9f7d778e10afa68e: Status 404 returned error can't find the container with id c95da640e7d6c125ac960f023499c34b373cc3ebdf75a22e9f7d778e10afa68e
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.892750    4603 manager.go:1116] Failed to create existing container: /crio-a51c6e770f163ccf442a16038b6b9c3e5f3d353bb1501cb14529ec7c5a3af0ff: Error finding container a51c6e770f163ccf442a16038b6b9c3e5f3d353bb1501cb14529ec7c5a3af0ff: Status 404 returned error can't find the container with id a51c6e770f163ccf442a16038b6b9c3e5f3d353bb1501cb14529ec7c5a3af0ff
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.893985    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-a9bb81bccbbdb65bfddc7e8ae0372d98cc321aff66a0e4c1b90d113f40493d29: Error finding container a9bb81bccbbdb65bfddc7e8ae0372d98cc321aff66a0e4c1b90d113f40493d29: Status 404 returned error can't find the container with id a9bb81bccbbdb65bfddc7e8ae0372d98cc321aff66a0e4c1b90d113f40493d29
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.894534    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-cd934033b8b749e8fbf39dc50923545f4ae0462a02534662bd3707252a953339: Error finding container cd934033b8b749e8fbf39dc50923545f4ae0462a02534662bd3707252a953339: Status 404 returned error can't find the container with id cd934033b8b749e8fbf39dc50923545f4ae0462a02534662bd3707252a953339
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.894839    4603 manager.go:1116] Failed to create existing container: /crio-8fe1a6229a07294d57fdfcba86434009401d86750a7200d3f47c98a3c5f462ff: Error finding container 8fe1a6229a07294d57fdfcba86434009401d86750a7200d3f47c98a3c5f462ff: Status 404 returned error can't find the container with id 8fe1a6229a07294d57fdfcba86434009401d86750a7200d3f47c98a3c5f462ff
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.895065    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-864f9da3482461047e72d6ddaae0fccf9c2d781252e6aaeefb58191a208a6538: Error finding container 864f9da3482461047e72d6ddaae0fccf9c2d781252e6aaeefb58191a208a6538: Status 404 returned error can't find the container with id 864f9da3482461047e72d6ddaae0fccf9c2d781252e6aaeefb58191a208a6538
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.895325    4603 manager.go:1116] Failed to create existing container: /crio-864f9da3482461047e72d6ddaae0fccf9c2d781252e6aaeefb58191a208a6538: Error finding container 864f9da3482461047e72d6ddaae0fccf9c2d781252e6aaeefb58191a208a6538: Status 404 returned error can't find the container with id 864f9da3482461047e72d6ddaae0fccf9c2d781252e6aaeefb58191a208a6538
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.896782    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-a51c6e770f163ccf442a16038b6b9c3e5f3d353bb1501cb14529ec7c5a3af0ff: Error finding container a51c6e770f163ccf442a16038b6b9c3e5f3d353bb1501cb14529ec7c5a3af0ff: Status 404 returned error can't find the container with id a51c6e770f163ccf442a16038b6b9c3e5f3d353bb1501cb14529ec7c5a3af0ff
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.897162    4603 manager.go:1116] Failed to create existing container: /crio-b3365be8dddb7113da0bdec2c0145596c804acf5e278d56b15be3d6bb532ce86: Error finding container b3365be8dddb7113da0bdec2c0145596c804acf5e278d56b15be3d6bb532ce86: Status 404 returned error can't find the container with id b3365be8dddb7113da0bdec2c0145596c804acf5e278d56b15be3d6bb532ce86
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.897449    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-a5b698cce2e526eaeed4f5703f59c81599e2845623c5c84e447a31bc0d55a172: Error finding container a5b698cce2e526eaeed4f5703f59c81599e2845623c5c84e447a31bc0d55a172: Status 404 returned error can't find the container with id a5b698cce2e526eaeed4f5703f59c81599e2845623c5c84e447a31bc0d55a172
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.900439    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-8fe1a6229a07294d57fdfcba86434009401d86750a7200d3f47c98a3c5f462ff: Error finding container 8fe1a6229a07294d57fdfcba86434009401d86750a7200d3f47c98a3c5f462ff: Status 404 returned error can't find the container with id 8fe1a6229a07294d57fdfcba86434009401d86750a7200d3f47c98a3c5f462ff
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.904731    4603 manager.go:1116] Failed to create existing container: /docker/6d013edf8c00d46958b167477b3abd91ccc597551a86171813baadf795aa40c0/crio-b3365be8dddb7113da0bdec2c0145596c804acf5e278d56b15be3d6bb532ce86: Error finding container b3365be8dddb7113da0bdec2c0145596c804acf5e278d56b15be3d6bb532ce86: Status 404 returned error can't find the container with id b3365be8dddb7113da0bdec2c0145596c804acf5e278d56b15be3d6bb532ce86
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.905809    4603 manager.go:1116] Failed to create existing container: /crio-cd934033b8b749e8fbf39dc50923545f4ae0462a02534662bd3707252a953339: Error finding container cd934033b8b749e8fbf39dc50923545f4ae0462a02534662bd3707252a953339: Status 404 returned error can't find the container with id cd934033b8b749e8fbf39dc50923545f4ae0462a02534662bd3707252a953339
	Sep 19 22:35:23 functional-995015 kubelet[4603]: E0919 22:35:23.906245    4603 manager.go:1116] Failed to create existing container: /crio-c95da640e7d6c125ac960f023499c34b373cc3ebdf75a22e9f7d778e10afa68e: Error finding container c95da640e7d6c125ac960f023499c34b373cc3ebdf75a22e9f7d778e10afa68e: Status 404 returned error can't find the container with id c95da640e7d6c125ac960f023499c34b373cc3ebdf75a22e9f7d778e10afa68e
	Sep 19 22:35:24 functional-995015 kubelet[4603]: E0919 22:35:24.034564    4603 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321324034095758 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:246740} inodes_used:{value:106}}"
	Sep 19 22:35:24 functional-995015 kubelet[4603]: E0919 22:35:24.034608    4603 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321324034095758 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:246740} inodes_used:{value:106}}"
	Sep 19 22:35:26 functional-995015 kubelet[4603]: I0919 22:35:26.479707    4603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xx8vl" podStartSLOduration=3.493457514 podStartE2EDuration="7.479685706s" podCreationTimestamp="2025-09-19 22:35:19 +0000 UTC" firstStartedPulling="2025-09-19 22:35:20.061255134 +0000 UTC m=+656.518568239" lastFinishedPulling="2025-09-19 22:35:24.047483326 +0000 UTC m=+660.504796431" observedRunningTime="2025-09-19 22:35:24.478600107 +0000 UTC m=+660.935913212" watchObservedRunningTime="2025-09-19 22:35:26.479685706 +0000 UTC m=+662.936998811"
	Sep 19 22:35:29 functional-995015 kubelet[4603]: E0919 22:35:29.732537    4603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hk2tb" podUID="26a52a69-ea86-4203-bdb3-5aaaa2b48ff7"
	Sep 19 22:35:34 functional-995015 kubelet[4603]: E0919 22:35:34.036519    4603 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321334036268324 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:282090} inodes_used:{value:128}}"
	Sep 19 22:35:34 functional-995015 kubelet[4603]: E0919 22:35:34.036559    4603 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321334036268324 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:282090} inodes_used:{value:128}}"
	Sep 19 22:35:35 functional-995015 kubelet[4603]: E0919 22:35:35.732132    4603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-xpphs" podUID="659838f9-2d53-4fd1-8781-cdd87f2d3202"
	
	
	==> kubernetes-dashboard [9536a185043c3f33d871cd1cd74920f389dde3af30ca3287fbf40a2fe5f451bf] <==
	2025/09/19 22:35:24 Using namespace: kubernetes-dashboard
	2025/09/19 22:35:24 Using in-cluster config to connect to apiserver
	2025/09/19 22:35:24 Using secret token for csrf signing
	2025/09/19 22:35:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 22:35:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 22:35:24 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 22:35:24 Generating JWE encryption key
	2025/09/19 22:35:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 22:35:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 22:35:24 Initializing JWE encryption key from synchronized object
	2025/09/19 22:35:24 Creating in-cluster Sidecar client
	2025/09/19 22:35:24 Serving insecurely on HTTP port: 9090
	2025/09/19 22:35:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 22:35:24 Starting overwatch
	
	
	==> storage-provisioner [130a41933411af26ae79d98ca8b9ed9e997f6179d03745b46d4297833e1e1c09] <==
	W0919 22:35:13.937411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:15.940194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:15.948477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:17.951710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:17.956094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:19.958789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:19.963422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:21.978578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:21.995132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:23.999926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:24.008367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:26.011342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:26.015890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:28.020087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:28.027404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:30.032452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:30.043369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:32.052048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:32.065048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:34.068927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:34.076206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:36.080187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:36.091075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:38.106909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:38.115629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9569e8b96f7105b021ab89c8cbd3ca63140a17a83908dba6e65bda365df73bfc] <==
	I0919 22:23:51.009658       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 22:23:51.027429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 22:23:51.027559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 22:23:51.030115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:23:54.485648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:23:58.746634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:24:02.344575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:24:05.398709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-995015 -n functional-995015
helpers_test.go:269: (dbg) Run:  kubectl --context functional-995015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hk2tb hello-node-connect-7d85dfc575-xpphs
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-995015 describe pod busybox-mount hello-node-75c85bcc94-hk2tb hello-node-connect-7d85dfc575-xpphs
helpers_test.go:290: (dbg) kubectl --context functional-995015 describe pod busybox-mount hello-node-75c85bcc94-hk2tb hello-node-connect-7d85dfc575-xpphs:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-995015/192.168.49.2
	Start Time:       Fri, 19 Sep 2025 22:35:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://4cd07ae9c153702ca7d1fbae5cf22cb7d50eea6a677cceff24d5d426a0236668
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Sep 2025 22:35:11 +0000
	      Finished:     Fri, 19 Sep 2025 22:35:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n2k48 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n2k48:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  32s   default-scheduler  Successfully assigned default/busybox-mount to functional-995015
	  Normal  Pulling    32s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     28s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.907s (3.907s including waiting). Image size: 3774172 bytes.
	  Normal  Created    28s   kubelet            Created container: mount-munger
	  Normal  Started    28s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hk2tb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-995015/192.168.49.2
	Start Time:       Fri, 19 Sep 2025 22:24:56 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-44rcr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-44rcr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hk2tb to functional-995015
	  Normal   Pulling    7m35s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m35s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m35s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     37s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-xpphs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-995015/192.168.49.2
	Start Time:       Fri, 19 Sep 2025 22:25:35 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5d25 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r5d25:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xpphs to functional-995015
	  Normal   Pulling    7m24s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m24s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m24s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m3s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.94s)
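
The short-name failures recorded above ("did not resolve to an alias and no unqualified-search registries are defined") come from CRI-O's short-name resolution policy: "kicbase/echo-server" is not a fully-qualified image reference, and the node's /etc/containers/registries.conf lists no registries to search for unqualified names. A minimal sketch of one possible node-side fix, assuming shell access to this run's profile and that docker.io is an acceptable search registry (both assumptions, not part of this run):

	minikube -p functional-995015 ssh
	# inside the node: append a search registry for short names, then restart CRI-O
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio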

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-995015 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-995015 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hk2tb" [26a52a69-ea86-4203-bdb3-5aaaa2b48ff7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-995015 -n functional-995015
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-19 22:34:57.22882497 +0000 UTC m=+1236.140063296
functional_test.go:1460: (dbg) Run:  kubectl --context functional-995015 describe po hello-node-75c85bcc94-hk2tb -n default
functional_test.go:1460: (dbg) kubectl --context functional-995015 describe po hello-node-75c85bcc94-hk2tb -n default:
Name:             hello-node-75c85bcc94-hk2tb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-995015/192.168.49.2
Start Time:       Fri, 19 Sep 2025 22:24:56 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-44rcr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-44rcr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hk2tb to functional-995015
Normal   Pulling    6m53s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m53s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m53s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-995015 logs hello-node-75c85bcc94-hk2tb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-995015 logs hello-node-75c85bcc94-hk2tb -n default: exit status 1 (95.464759ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hk2tb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-995015 logs hello-node-75c85bcc94-hk2tb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.79s)
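
An alternative fix that leaves the node configuration untouched is to reference the image by a fully-qualified name, so CRI-O never attempts short-name resolution. A hypothetical re-run of the failing step (the docker.io registry prefix is an assumption):

	# create the deployment with a fully-qualified image reference
	kubectl --context functional-995015 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	# or, on the live cluster, repoint the existing deployment's container in place
	kubectl --context functional-995015 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest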

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 service --namespace=default --https --url hello-node: exit status 115 (370.458827ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31923
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-995015 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 service hello-node --url --format={{.IP}}: exit status 115 (404.58563ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-995015 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 service hello-node --url: exit status 115 (389.556053ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31923
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-995015 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31923
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
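
The three ServiceCmd failures above (HTTPS, Format, URL) share one root cause with DeployApp: minikube resolves the NodePort (it prints 31923) but exits with SVC_UNREACHABLE because no running pod backs the service. A sketch of how to confirm this from the same context:

	# the Service exists and exposes a NodePort...
	kubectl --context functional-995015 get svc hello-node
	# ...but its EndpointSlices carry no ready endpoints while the pod sits in ImagePullBackOff
	kubectl --context functional-995015 get endpointslices -l kubernetes.io/service-name=hello-node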

                                                
                                    

Test pass (293/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.11
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 6.69
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 182.52
31 TestAddons/serial/GCPAuth/Namespaces 0.25
35 TestAddons/parallel/Registry 18.08
36 TestAddons/parallel/RegistryCreds 0.73
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 5.83
41 TestAddons/parallel/CSI 55.85
42 TestAddons/parallel/Headlamp 18.72
43 TestAddons/parallel/CloudSpanner 5.58
44 TestAddons/parallel/LocalPath 52.01
45 TestAddons/parallel/NvidiaDevicePlugin 6.54
46 TestAddons/parallel/Yakd 11.76
48 TestAddons/StoppedEnableDisable 12.18
49 TestCertOptions 39.79
50 TestCertExpiration 256.7
52 TestForceSystemdFlag 35.28
53 TestForceSystemdEnv 44.1
59 TestErrorSpam/setup 29.47
60 TestErrorSpam/start 0.77
61 TestErrorSpam/status 1.14
62 TestErrorSpam/pause 1.73
63 TestErrorSpam/unpause 2.27
64 TestErrorSpam/stop 1.49
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.67
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.96
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.3
76 TestFunctional/serial/CacheCmd/cache/add_local 1.45
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 36.67
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.85
87 TestFunctional/serial/LogsFileCmd 1.98
88 TestFunctional/serial/InvalidService 4.97
90 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DashboardCmd 8.4
92 TestFunctional/parallel/DryRun 0.47
93 TestFunctional/parallel/InternationalLanguage 0.19
94 TestFunctional/parallel/StatusCmd 1.04
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 23.78
102 TestFunctional/parallel/SSHCmd 0.55
103 TestFunctional/parallel/CpCmd 2.02
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 2.21
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 1.16
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.13
122 TestFunctional/parallel/ImageCommands/Setup 0.67
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.79
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.09
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.36
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ServiceCmd/List 0.5
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
151 TestFunctional/parallel/ProfileCmd/profile_list 0.42
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
153 TestFunctional/parallel/MountCmd/any-port 8.66
154 TestFunctional/parallel/MountCmd/specific-port 1.66
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.14
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 200.62
164 TestMultiControlPlane/serial/DeployApp 42.92
165 TestMultiControlPlane/serial/PingHostFromPods 1.63
166 TestMultiControlPlane/serial/AddWorkerNode 31.28
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
169 TestMultiControlPlane/serial/CopyFile 19.45
170 TestMultiControlPlane/serial/StopSecondaryNode 12.74
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 33.04
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 121.55
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.21
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
177 TestMultiControlPlane/serial/StopCluster 35.69
178 TestMultiControlPlane/serial/RestartCluster 77.68
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
180 TestMultiControlPlane/serial/AddSecondaryNode 81.55
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
185 TestJSONOutput/start/Command 82.56
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.69
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 44.66
211 TestKicCustomNetwork/use_default_bridge_network 35.84
212 TestKicExistingNetwork 36.06
213 TestKicCustomSubnet 37.42
214 TestKicStaticIP 33.66
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 68.71
219 TestMountStart/serial/StartWithMountFirst 9.22
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 6.83
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.21
226 TestMountStart/serial/RestartStopped 8.23
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 133.83
231 TestMultiNode/serial/DeployApp2Nodes 6.23
232 TestMultiNode/serial/PingHostFrom2Pods 0.98
233 TestMultiNode/serial/AddNode 58.04
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.12
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.39
239 TestMultiNode/serial/RestartKeepsNodes 81.16
240 TestMultiNode/serial/DeleteNode 5.55
241 TestMultiNode/serial/StopMultiNode 23.95
242 TestMultiNode/serial/RestartMultiNode 49.06
243 TestMultiNode/serial/ValidateNameConflict 37.79
248 TestPreload 127.24
250 TestScheduledStopUnix 108.15
253 TestInsufficientStorage 10.5
254 TestRunningBinaryUpgrade 53.04
256 TestKubernetesUpgrade 360.12
257 TestMissingContainerUpgrade 117.53
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 44.14
261 TestNoKubernetes/serial/StartWithStopK8s 28.45
262 TestNoKubernetes/serial/Start 5.65
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 0.66
265 TestNoKubernetes/serial/Stop 1.22
266 TestNoKubernetes/serial/StartNoArgs 8.14
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.47
268 TestStoppedBinaryUpgrade/Setup 0.98
269 TestStoppedBinaryUpgrade/Upgrade 61.95
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
279 TestPause/serial/Start 79.32
280 TestPause/serial/SecondStartNoReconfiguration 28.77
281 TestPause/serial/Pause 0.95
282 TestPause/serial/VerifyStatus 0.35
283 TestPause/serial/Unpause 0.68
284 TestPause/serial/PauseAgain 0.88
285 TestPause/serial/DeletePaused 2.66
286 TestPause/serial/VerifyDeletedResources 0.39
294 TestNetworkPlugins/group/false 4.05
299 TestStartStop/group/old-k8s-version/serial/FirstStart 61.96
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.57
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.24
302 TestStartStop/group/old-k8s-version/serial/Stop 11.97
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/old-k8s-version/serial/SecondStart 55.59
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
308 TestStartStop/group/old-k8s-version/serial/Pause 3.16
310 TestStartStop/group/no-preload/serial/FirstStart 72.58
312 TestStartStop/group/embed-certs/serial/FirstStart 80.64
313 TestStartStop/group/no-preload/serial/DeployApp 10.5
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.41
315 TestStartStop/group/no-preload/serial/Stop 12.13
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/no-preload/serial/SecondStart 55.38
318 TestStartStop/group/embed-certs/serial/DeployApp 11.39
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
320 TestStartStop/group/embed-certs/serial/Stop 11.96
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/embed-certs/serial/SecondStart 52.76
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
326 TestStartStop/group/no-preload/serial/Pause 4.55
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.64
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/embed-certs/serial/Pause 3.1
334 TestStartStop/group/newest-cni/serial/FirstStart 35.2
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.4
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
338 TestStartStop/group/newest-cni/serial/Stop 1.27
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 17.3
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.73
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
348 TestStartStop/group/newest-cni/serial/Pause 3.01
349 TestNetworkPlugins/group/auto/Start 84.85
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
354 TestNetworkPlugins/group/kindnet/Start 81.65
355 TestNetworkPlugins/group/auto/KubeletFlags 0.36
356 TestNetworkPlugins/group/auto/NetCatPod 13.39
357 TestNetworkPlugins/group/auto/DNS 0.2
358 TestNetworkPlugins/group/auto/Localhost 0.19
359 TestNetworkPlugins/group/auto/HairPin 0.2
360 TestNetworkPlugins/group/calico/Start 63.12
361 TestNetworkPlugins/group/kindnet/ControllerPod 6
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
363 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
364 TestNetworkPlugins/group/kindnet/DNS 0.24
365 TestNetworkPlugins/group/kindnet/Localhost 0.16
366 TestNetworkPlugins/group/kindnet/HairPin 0.19
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.28
369 TestNetworkPlugins/group/calico/NetCatPod 12.26
370 TestNetworkPlugins/group/custom-flannel/Start 67.86
371 TestNetworkPlugins/group/calico/DNS 0.24
372 TestNetworkPlugins/group/calico/Localhost 0.18
373 TestNetworkPlugins/group/calico/HairPin 0.19
374 TestNetworkPlugins/group/enable-default-cni/Start 78.14
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
377 TestNetworkPlugins/group/custom-flannel/DNS 0.18
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
380 TestNetworkPlugins/group/flannel/Start 61.11
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.37
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
386 TestNetworkPlugins/group/bridge/Start 83.28
387 TestNetworkPlugins/group/flannel/ControllerPod 6
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
389 TestNetworkPlugins/group/flannel/NetCatPod 11.34
390 TestNetworkPlugins/group/flannel/DNS 0.24
391 TestNetworkPlugins/group/flannel/Localhost 0.22
392 TestNetworkPlugins/group/flannel/HairPin 0.22
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
394 TestNetworkPlugins/group/bridge/NetCatPod 10.27
395 TestNetworkPlugins/group/bridge/DNS 0.17
396 TestNetworkPlugins/group/bridge/Localhost 0.14
397 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.28.0/json-events (7.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-337268 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-337268 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.108663998s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0919 22:14:28.238390    4161 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0919 22:14:28.238465    4161 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-337268
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-337268: exit status 85 (91.098783ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-337268 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-337268 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:21.174855    4166 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:21.174980    4166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:21.174991    4166 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:21.174995    4166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:21.175259    4166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	W0919 22:14:21.175391    4166 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21594-2355/.minikube/config/config.json: open /home/jenkins/minikube-integration/21594-2355/.minikube/config/config.json: no such file or directory
	I0919 22:14:21.175798    4166 out.go:368] Setting JSON to true
	I0919 22:14:21.176573    4166 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3412,"bootTime":1758316649,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 22:14:21.176650    4166 start.go:140] virtualization:  
	I0919 22:14:21.180854    4166 out.go:99] [download-only-337268] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0919 22:14:21.181050    4166 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 22:14:21.181165    4166 notify.go:220] Checking for updates...
	I0919 22:14:21.184675    4166 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:21.187646    4166 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:21.190707    4166 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:14:21.193683    4166 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 22:14:21.196751    4166 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0919 22:14:21.202362    4166 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:21.202594    4166 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:21.226158    4166 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 22:14:21.226293    4166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:21.661024    4166 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-19 22:14:21.651639484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:14:21.661139    4166 docker.go:318] overlay module found
	I0919 22:14:21.664225    4166 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:21.664265    4166 start.go:304] selected driver: docker
	I0919 22:14:21.664273    4166 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:21.664385    4166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:21.720488    4166 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-19 22:14:21.71132993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:14:21.720646    4166 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:21.720920    4166 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0919 22:14:21.721097    4166 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:21.724388    4166 out.go:171] Using Docker driver with root privileges
	I0919 22:14:21.727335    4166 cni.go:84] Creating CNI manager for ""
	I0919 22:14:21.727402    4166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:21.727416    4166 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:21.727500    4166 start.go:348] cluster config:
	{Name:download-only-337268 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-337268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:21.730564    4166 out.go:99] Starting "download-only-337268" primary control-plane node in "download-only-337268" cluster
	I0919 22:14:21.730589    4166 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:14:21.733475    4166 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:21.733507    4166 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:21.733664    4166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:21.749292    4166 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:21.749463    4166 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:21.749567    4166 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:21.801203    4166 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0919 22:14:21.801228    4166 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:21.801405    4166 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:21.804708    4166 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0919 22:14:21.804747    4166 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0919 22:14:21.889398    4166 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-337268 host does not exist
	  To start a cluster, run: "minikube start -p download-only-337268"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
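The preload fetch above carries the expected hash in the "?checksum=md5:..." query string; the downloader verifies the fetched bytes against it (the v1.34.0 run below logs the matching "saving checksum"/"verifying checksum" steps). A minimal standalone sketch of that pattern in Go; downloadWithMD5 is a hypothetical helper, not minikube's actual API:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and verifies the MD5 of the
// downloaded bytes against wantMD5 (lower-case hex).
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk, so no second read is needed.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the download.go log line above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	if err := downloadWithMD5(url, "preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}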
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-337268
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (6.69s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-085720 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-085720 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.688584549s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (6.69s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0919 22:14:35.371990    4161 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0919 22:14:35.372028    4161 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-085720
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-085720: exit status 85 (60.958598ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-337268 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-337268 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ delete  │ -p download-only-337268                                                                                                                                                   │ download-only-337268 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ -o=json --download-only -p download-only-085720 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-085720 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:28.725278    4366 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:28.725388    4366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:28.725398    4366 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:28.725403    4366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:28.725687    4366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:14:28.726104    4366 out.go:368] Setting JSON to true
	I0919 22:14:28.726874    4366 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3420,"bootTime":1758316649,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 22:14:28.726945    4366 start.go:140] virtualization:  
	I0919 22:14:28.730209    4366 out.go:99] [download-only-085720] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0919 22:14:28.730457    4366 notify.go:220] Checking for updates...
	I0919 22:14:28.733243    4366 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:28.736183    4366 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:28.739140    4366 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:14:28.742006    4366 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 22:14:28.744897    4366 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0919 22:14:28.750530    4366 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:28.750766    4366 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:28.771695    4366 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 22:14:28.771828    4366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:28.842720    4366 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-19 22:14:28.833855109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:14:28.842829    4366 docker.go:318] overlay module found
	I0919 22:14:28.845873    4366 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:28.845910    4366 start.go:304] selected driver: docker
	I0919 22:14:28.845926    4366 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:28.846023    4366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:28.900305    4366 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-19 22:14:28.891271254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:14:28.900456    4366 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:28.900757    4366 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0919 22:14:28.900909    4366 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:28.903928    4366 out.go:171] Using Docker driver with root privileges
	I0919 22:14:28.906668    4366 cni.go:84] Creating CNI manager for ""
	I0919 22:14:28.906729    4366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:28.906741    4366 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:28.906813    4366 start.go:348] cluster config:
	{Name:download-only-085720 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-085720 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:28.909744    4366 out.go:99] Starting "download-only-085720" primary control-plane node in "download-only-085720" cluster
	I0919 22:14:28.909768    4366 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:14:28.912579    4366 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:28.912612    4366 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:28.912780    4366 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:28.928290    4366 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:28.928428    4366 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:28.928452    4366 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0919 22:14:28.928457    4366 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0919 22:14:28.928465    4366 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0919 22:14:28.972049    4366 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0919 22:14:28.972087    4366 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:28.972250    4366 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:28.975318    4366 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0919 22:14:28.975345    4366 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0919 22:14:29.064263    4366 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0919 22:14:33.585802    4366 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0919 22:14:33.585913    4366 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21594-2355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-085720 host does not exist
	  To start a cluster, run: "minikube start -p download-only-085720"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
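The kicbase lines above illustrate the image-cache short-circuit: the base image is looked up first in the local docker daemon, then in the on-disk cache, and the pull is skipped on a hit (contrast the v1.28.0 run, which had to populate the cache). A rough sketch of that lookup order in Go; the helper names and the cache path are illustrative, not minikube's real layout:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// inDaemon reports whether the local docker daemon already holds the image;
// "docker image inspect" exits non-zero when it does not.
func inDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

// inCache reports whether a cached tarball for the image exists on disk.
func inCache(cacheDir, ref string) bool {
	_, err := os.Stat(filepath.Join(cacheDir, filepath.Base(ref)+".tar"))
	return err == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase:v0.0.48"
	cacheDir := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "kic") // assumed path
	switch {
	case inDaemon(ref):
		fmt.Println("found in local docker daemon, skipping pull")
	case inCache(cacheDir, ref):
		fmt.Println("exists in cache, skipping pull")
	default:
		fmt.Println("cache miss: would download", ref)
	}
}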
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-085720
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0919 22:14:36.590258    4161 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-823172 --alsologtostderr --binary-mirror http://127.0.0.1:33227 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-823172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-823172
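TestBinaryMirror exercises the "checksum=file:<url>" form logged above: instead of an inline hash, the expected value is fetched from a sibling .sha256 file and then compared against the downloaded binary. A minimal sketch in Go of resolving such a reference; fetchExpectedSHA256 is a hypothetical helper:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchExpectedSHA256 downloads a .sha256 file and returns the first
// whitespace-separated field, which is the hex digest in both the
// "<hex>" and "<hex>  <filename>" layouts.
func fetchExpectedSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(body))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", url)
	}
	return fields[0], nil
}

func main() {
	// URL taken from the binary.go log line above.
	sum, err := fetchExpectedSHA256("https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256")
	fmt.Println(sum, err)
}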
--- PASS: TestBinaryMirror (0.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-497709
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-497709: exit status 85 (60.567537ms)

-- stdout --
	* Profile "addons-497709" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-497709"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-497709
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-497709: exit status 85 (62.448685ms)

-- stdout --
	* Profile "addons-497709" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-497709"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (182.52s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-497709 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-497709 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m2.519863036s)
--- PASS: TestAddons/Setup (182.52s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-497709 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-497709 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/parallel/Registry (18.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 23.293137ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-9bs6l" [fddc0661-55c1-4ea7-8f3f-96c0da3e6157] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003681331s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-vw4wm" [517a7877-3698-4b63-bb25-804a2404813a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003162823s
addons_test.go:392: (dbg) Run:  kubectl --context addons-497709 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-497709 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-497709 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.118048352s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 ip
2025/09/19 22:18:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable registry --alsologtostderr -v=1
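The registry probe above has to run inside the cluster because registry.kube-system.svc.cluster.local only resolves through cluster DNS, which is why the test wraps it in "kubectl run ... --image=gcr.io/k8s-minikube/busybox". A rough Go equivalent of the "wget --spider -S" check (print status and headers, never read the body); a sketch, not the test's code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// spider issues a HEAD request and prints the response status and headers,
// roughly what `wget --spider -S <url>` reports.
func spider(url string) error {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Head(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(" ", resp.Proto, resp.Status)
	for k, v := range resp.Header {
		fmt.Printf("  %s: %v\n", k, v)
	}
	return nil
}

func main() {
	// Resolvable only from inside a pod; from the host, the test instead
	// hits the node IP on port 5000, as the follow-up GET above shows.
	if err := spider("http://registry.kube-system.svc.cluster.local"); err != nil {
		fmt.Println(err)
	}
}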
--- PASS: TestAddons/parallel/Registry (18.08s)

TestAddons/parallel/RegistryCreds (0.73s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.621056ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-497709
addons_test.go:332: (dbg) Run:  kubectl --context addons-497709 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xlwfp" [63f592a8-ef3c-41e3-b127-54005d0acb17] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003905789s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.170417ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xh9kv" [8789fcd4-548d-41f1-add8-a41bac88d2ee] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004115833s
addons_test.go:463: (dbg) Run:  kubectl --context addons-497709 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/CSI (55.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0919 22:18:21.789686    4161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 22:18:21.793917    4161 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 22:18:21.793951    4161 kapi.go:107] duration metric: took 4.272546ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.284732ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-497709 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-497709 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [73f04256-10da-46eb-a6f2-356017694492] Pending
helpers_test.go:352: "task-pv-pod" [73f04256-10da-46eb-a6f2-356017694492] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [73f04256-10da-46eb-a6f2-356017694492] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00364263s
addons_test.go:572: (dbg) Run:  kubectl --context addons-497709 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-497709 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-497709 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-497709 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-497709 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-497709 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-497709 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3724910e-ddb7-4e52-ae68-bfc5b0b85274] Pending
helpers_test.go:352: "task-pv-pod-restore" [3724910e-ddb7-4e52-ae68-bfc5b0b85274] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3724910e-ddb7-4e52-ae68-bfc5b0b85274] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003165724s
addons_test.go:614: (dbg) Run:  kubectl --context addons-497709 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-497709 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-497709 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.795790623s)
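The long runs of identical "get pvc ... -o jsonpath={.status.phase}" probes above are a plain poll-until-Bound loop in the test helpers. A standalone sketch of the same idea; waitForPVCPhase is a hypothetical name, not the helper's real API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls kubectl until the claim reports the wanted phase
// or the timeout elapses.
func waitForPVCPhase(kubeContext, name, namespace, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
}

func main() {
	// Names taken from the CSI test above.
	if err := waitForPVCPhase("addons-497709", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}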
--- PASS: TestAddons/parallel/CSI (55.85s)

TestAddons/parallel/Headlamp (18.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-497709 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-q2b6k" [bd0c1045-9d40-457f-93d5-b659c954b081] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-q2b6k" [bd0c1045-9d40-457f-93d5-b659c954b081] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-q2b6k" [bd0c1045-9d40-457f-93d5-b659c954b081] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004518443s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 addons disable headlamp --alsologtostderr -v=1: (5.780819937s)
--- PASS: TestAddons/parallel/Headlamp (18.72s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-smhwp" [c9730761-04ca-4b32-8553-df1bbb6cb4e5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003420451s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (52.01s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-497709 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-497709 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e22b1b9a-e8e7-486f-8557-4f6b2c9a64bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e22b1b9a-e8e7-486f-8557-4f6b2c9a64bb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e22b1b9a-e8e7-486f-8557-4f6b2c9a64bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003546376s
addons_test.go:967: (dbg) Run:  kubectl --context addons-497709 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 ssh "cat /opt/local-path-provisioner/pvc-024e6e4e-85e0-4958-a7fa-ec7c318c7704_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-497709 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-497709 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.920728637s)
--- PASS: TestAddons/parallel/LocalPath (52.01s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-k5jwt" [30c55f34-f5da-4ea2-a567-585f81ada4f1] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003886553s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (11.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-hlks9" [1231fd24-3574-4cff-98d2-cb4134b89fc7] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002973042s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-497709 addons disable yakd --alsologtostderr -v=1: (5.759308888s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-497709
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-497709: (11.918532615s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-497709
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-497709
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-497709
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

TestCertOptions (39.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-627563 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0919 23:12:23.740540    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:12:40.667772    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-627563 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.688209758s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-627563 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-627563 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-627563 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-627563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-627563
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-627563: (2.043734788s)
--- PASS: TestCertOptions (39.79s)
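
A minimal sketch of the same check, abridged from the start flags logged above (the full run passes two --apiserver-ips and two --apiserver-names values):

  out/minikube-linux-arm64 start -p cert-options-627563 --memory=3072 \
    --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=crio
  # the SAN section of the dumped certificate should list the extra IPs and names
  out/minikube-linux-arm64 -p cert-options-627563 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

kubectl config view (also run above) should then show the API server on port 8555.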

TestCertExpiration (256.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-486110 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-486110 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.899515817s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-486110 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-486110 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.060853551s)
helpers_test.go:175: Cleaning up "cert-expiration-486110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-486110
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-486110: (2.737481195s)
--- PASS: TestCertExpiration (256.70s)
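
As a sketch, the shape of this test: start with a deliberately short certificate lifetime, let it lapse (presumably the unaccounted-for ~3m in this test's 256s wall time), then restart with a long lifetime so the certificates are regenerated:

  out/minikube-linux-arm64 start -p cert-expiration-486110 --memory=3072 \
    --cert-expiration=3m --driver=docker --container-runtime=crio
  # ...wait out the 3m certificate lifetime...
  out/minikube-linux-arm64 start -p cert-expiration-486110 --memory=3072 \
    --cert-expiration=8760h --driver=docker --container-runtime=crio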

TestForceSystemdFlag (35.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-357269 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-357269 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.567929101s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-357269 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-357269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-357269
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-357269: (2.411054228s)
--- PASS: TestForceSystemdFlag (35.28s)
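
A minimal sketch of what this test drives, from the commands above; the assertion on the file contents is not visible in the log, but presumably it checks that the cgroup manager is set to systemd:

  out/minikube-linux-arm64 start -p force-systemd-flag-357269 --memory=3072 \
    --force-systemd --driver=docker --container-runtime=crio
  # inspect the CRI-O drop-in that minikube writes
  out/minikube-linux-arm64 -p force-systemd-flag-357269 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf"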

TestForceSystemdEnv (44.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-520192 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-520192 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.120231063s)
helpers_test.go:175: Cleaning up "force-systemd-env-520192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-520192
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-520192: (2.975471721s)
--- PASS: TestForceSystemdEnv (44.10s)

TestErrorSpam/setup (29.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-753446 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-753446 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-753446 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-753446 --driver=docker  --container-runtime=crio: (29.469574665s)
--- PASS: TestErrorSpam/setup (29.47s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (2.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 unpause
--- PASS: TestErrorSpam/unpause (2.27s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 stop: (1.293640753s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-753446 --log_dir /tmp/nospam-753446 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21594-2355/.minikube/files/etc/test/nested/copy/4161/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.67s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995015 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0919 22:22:40.667503    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:40.673907    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:40.685304    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:40.706674    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:40.748184    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:40.829629    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:40.991218    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:41.312592    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:41.954584    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:43.235919    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:45.797365    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:50.919322    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:23:01.161634    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:23:21.643028    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-995015 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.669934971s)
--- PASS: TestFunctional/serial/StartWithProxy (80.67s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.96s)

=== RUN   TestFunctional/serial/SoftStart
I0919 22:23:29.594556    4161 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995015 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-995015 --alsologtostderr -v=8: (27.955162501s)
functional_test.go:678: soft start took 27.95572259s for "functional-995015" cluster.
I0919 22:23:57.550020    4161 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (27.96s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-995015 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 cache add registry.k8s.io/pause:3.1: (1.278113431s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 cache add registry.k8s.io/pause:3.3: (1.637295751s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 cache add registry.k8s.io/pause:latest: (1.379340712s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-995015 /tmp/TestFunctionalserialCacheCmdcacheadd_local3726250317/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cache add minikube-local-cache-test:functional-995015
E0919 22:24:02.605037    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cache delete minikube-local-cache-test:functional-995015
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-995015
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.366354ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 cache reload: (1.15433521s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
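
The reload round-trip above, as a sketch (commands taken verbatim from this run):

  # remove the image from the node and confirm it is gone...
  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: no such image
  # ...then restore all cached images and re-check
  out/minikube-linux-arm64 -p functional-995015 cache reload
  out/minikube-linux-arm64 -p functional-995015 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds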

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 kubectl -- --context functional-995015 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-995015 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (36.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995015 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-995015 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.664822104s)
functional_test.go:776: restart took 36.664937084s for "functional-995015" cluster.
I0919 22:24:43.038648    4161 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (36.67s)
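
A sketch of the restart, from the command above. --extra-config takes component.flag=value pairs; the option is persisted in the profile, which is why it reappears as ExtraOptions in the config dumps of the dry-run tests further down:

  out/minikube-linux-arm64 start -p functional-995015 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all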

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-995015 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 logs: (1.848970257s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 logs --file /tmp/TestFunctionalserialLogsFileCmd674888503/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 logs --file /tmp/TestFunctionalserialLogsFileCmd674888503/001/logs.txt: (1.980117308s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.98s)

TestFunctional/serial/InvalidService (4.97s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-995015 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-995015
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-995015: exit status 115 (590.855397ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32379 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-995015 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-995015 delete -f testdata/invalidsvc.yaml: (1.113228147s)
--- PASS: TestFunctional/serial/InvalidService (4.97s)
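
A sketch of the negative path this test covers: minikube service still prints the NodePort URL table, but exits 115 (SVC_UNREACHABLE) because no running pod backs the service:

  kubectl --context functional-995015 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-arm64 service invalid-svc -p functional-995015   # exit 115
  kubectl --context functional-995015 delete -f testdata/invalidsvc.yaml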

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 config get cpus: exit status 14 (55.120229ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 config get cpus: exit status 14 (65.501291ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
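
The set/get/unset cycle above, as a sketch; per this run, config get on an unset key exits 14, which is what makes the unset step verifiable from a script:

  out/minikube-linux-arm64 -p functional-995015 config get cpus    # exit 14: key not found
  out/minikube-linux-arm64 -p functional-995015 config set cpus 2
  out/minikube-linux-arm64 -p functional-995015 config get cpus    # exit 0
  out/minikube-linux-arm64 -p functional-995015 config unset cpus
  out/minikube-linux-arm64 -p functional-995015 config get cpus    # exit 14 again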

TestFunctional/parallel/DashboardCmd (8.4s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-995015 --alsologtostderr -v=1]
2025/09/19 22:35:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-995015 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 35577: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.40s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-995015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.72549ms)
-- stdout --
	* [functional-995015] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0919 22:35:17.818836   35292 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:35:17.819024   35292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:17.819038   35292 out.go:374] Setting ErrFile to fd 2...
	I0919 22:35:17.819043   35292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:17.819365   35292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:35:17.819784   35292 out.go:368] Setting JSON to false
	I0919 22:35:17.820663   35292 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4669,"bootTime":1758316649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 22:35:17.820735   35292 start.go:140] virtualization:  
	I0919 22:35:17.824578   35292 out.go:179] * [functional-995015] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0919 22:35:17.828220   35292 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:35:17.828297   35292 notify.go:220] Checking for updates...
	I0919 22:35:17.834100   35292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:35:17.837017   35292 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:35:17.839896   35292 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 22:35:17.842967   35292 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 22:35:17.846041   35292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:35:17.849541   35292 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:35:17.850119   35292 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:35:17.880591   35292 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 22:35:17.880699   35292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:17.936196   35292 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-19 22:35:17.926685296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:35:17.936297   35292 docker.go:318] overlay module found
	I0919 22:35:17.939324   35292 out.go:179] * Using the docker driver based on existing profile
	I0919 22:35:17.942144   35292 start.go:304] selected driver: docker
	I0919 22:35:17.942164   35292 start.go:918] validating driver "docker" against &{Name:functional-995015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-995015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:35:17.942453   35292 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:35:17.945931   35292 out.go:203] 
	W0919 22:35:17.948846   35292 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 22:35:17.952144   35292 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995015 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
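
A sketch of the failing dry run above; validation happens before any resources are created, and the under-provisioned request exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY (250MiB is below the 1800MB usable minimum):

  out/minikube-linux-arm64 start -p functional-995015 --dry-run --memory 250MB \
    --alsologtostderr --driver=docker --container-runtime=crio   # exit 23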

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-995015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-995015 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.778053ms)
-- stdout --
	* [functional-995015] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0919 22:35:18.291814   35410 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:35:18.292025   35410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:18.292047   35410 out.go:374] Setting ErrFile to fd 2...
	I0919 22:35:18.292067   35410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:18.292458   35410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:35:18.292879   35410 out.go:368] Setting JSON to false
	I0919 22:35:18.293741   35410 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4669,"bootTime":1758316649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 22:35:18.293838   35410 start.go:140] virtualization:  
	I0919 22:35:18.297107   35410 out.go:179] * [functional-995015] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0919 22:35:18.300897   35410 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:35:18.300982   35410 notify.go:220] Checking for updates...
	I0919 22:35:18.306970   35410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:35:18.309808   35410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 22:35:18.312727   35410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 22:35:18.315568   35410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 22:35:18.318430   35410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:35:18.322043   35410 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:35:18.322669   35410 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:35:18.351314   35410 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 22:35:18.351434   35410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:18.412053   35410 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-19 22:35:18.403101505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:35:18.412161   35410 docker.go:318] overlay module found
	I0919 22:35:18.415418   35410 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0919 22:35:18.418357   35410 start.go:304] selected driver: docker
	I0919 22:35:18.418389   35410 start.go:918] validating driver "docker" against &{Name:functional-995015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-995015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:35:18.418574   35410 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:35:18.422089   35410 out.go:203] 
	W0919 22:35:18.424959   35410 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 22:35:18.427820   35410 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
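
The three invocations above, as a sketch; -f takes a Go template over the status struct, and the label text before each colon is free-form (so the "kublet" typo in the logged format string is harmless):

  out/minikube-linux-arm64 -p functional-995015 status
  out/minikube-linux-arm64 -p functional-995015 status -o json
  out/minikube-linux-arm64 -p functional-995015 status \
    -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}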

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (23.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [55683448-21dd-4156-a815-2abffbe72970] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00483712s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-995015 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-995015 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-995015 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-995015 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6be73c4a-af50-47e7-9f30-174741f2dc81] Pending
helpers_test.go:352: "sp-pod" [6be73c4a-af50-47e7-9f30-174741f2dc81] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6be73c4a-af50-47e7-9f30-174741f2dc81] Running
E0919 22:25:24.527168    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003688891s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-995015 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-995015 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-995015 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c588752f-0cc9-4fb9-a4ea-c18b404df0cf] Pending
helpers_test.go:352: "sp-pod" [c588752f-0cc9-4fb9-a4ea-c18b404df0cf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003649446s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-995015 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.78s)
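
The persistence check above, as a sketch: write a file through the PVC-backed mount, delete the pod, recreate it, and confirm the file survived (the manifests are the repo's testdata; sp-pod mounts the claim at /tmp/mount):

  kubectl --context functional-995015 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-995015 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-995015 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-995015 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-995015 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-995015 exec sp-pod -- ls /tmp/mount   # foo is still there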

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh -n functional-995015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cp functional-995015:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3223973614/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh -n functional-995015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh -n functional-995015 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4161/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /etc/test/nested/copy/4161/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4161.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /etc/ssl/certs/4161.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4161.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /usr/share/ca-certificates/4161.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41612.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /etc/ssl/certs/41612.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41612.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /usr/share/ca-certificates/41612.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
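
Note: the 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash filenames for the synced certs. Assuming openssl is available in the guest, the mapping could be spot-checked with a sketch like:

	out/minikube-linux-arm64 -p functional-995015 ssh "openssl x509 -noout -hash -in /usr/share/ca-certificates/4161.pem"   # should print 51391683 if that hash link belongs to 4161.pem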

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-995015 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh "sudo systemctl is-active docker": exit status 1 (392.746006ms)

                                                
                                                
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh "sudo systemctl is-active containerd": exit status 1 (346.330035ms)

                                                
                                                
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
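
Note: `systemctl is-active` exits 0 only for an active unit, and the status-3 exits captured above are what it returns alongside "inactive", so the non-zero exits are exactly what this test wants on a crio cluster. A quick manual cross-check (sketch; the crio unit name is the one minikube's guest uses):

	out/minikube-linux-arm64 -p functional-995015 ssh "sudo systemctl is-active crio"     # expected: active, exit 0
	out/minikube-linux-arm64 -p functional-995015 ssh "sudo systemctl is-active docker"   # expected: inactive, exit 3 (as above)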

                                                
                                    
TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 version -o=json --components: (1.155688953s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995015 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-995015
localhost/kicbase/echo-server:functional-995015
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995015 image ls --format short --alsologtostderr:
I0919 22:35:28.524798   35965 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:28.525017   35965 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:28.525030   35965 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:28.525035   35965 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:28.525338   35965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
I0919 22:35:28.526090   35965 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:28.526251   35965 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:28.526784   35965 cli_runner.go:164] Run: docker container inspect functional-995015 --format={{.State.Status}}
I0919 22:35:28.545066   35965 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:28.545132   35965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995015
I0919 22:35:28.565202   35965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/functional-995015/id_rsa Username:docker}
I0919 22:35:28.658607   35965 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995015 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-995015  │ 0df51e09838d6 │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 17848b7d08d19 │ 202MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ localhost/kicbase/echo-server           │ functional-995015  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/my-image                      │ functional-995015  │ 45c11fff3dc01 │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995015 image ls --format table --alsologtostderr:
I0919 22:35:33.349824   36312 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:33.350027   36312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:33.350039   36312 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:33.350044   36312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:33.350343   36312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
I0919 22:35:33.350977   36312 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:33.351105   36312 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:33.351555   36312 cli_runner.go:164] Run: docker container inspect functional-995015 --format={{.State.Status}}
I0919 22:35:33.370204   36312 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:33.370279   36312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995015
I0919 22:35:33.389586   36312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/functional-995015/id_rsa Username:docker}
I0919 22:35:33.486876   36312 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995015 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97","repoDigests":["docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s
-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"0df51e09838d60db260dd561abb880838f307b736da5a0ac66eec1013ac1317f","repoDigests":["localhost/minikube-local-cache-test@sha256:0163a756f990b308b0f8083d435c0a69f1615bb82e2f8a64108b3be73d55bf4d"],"repoTags":["localhost/minikube-local-cache-test:functional-995015"],"size":"3330"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","r
epoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ea0061ef2bdb84fcb44e9e0d1c7987b5a3c2d9bddc004bbf74d9294d2b382264","repoDigests":["docker.io/library/d5dcc82a54a9c49268d0a21de67a0fa3bdf1cee3e5028e16b7319de468028104-tmp@sha256:1fc531efd5f8f09b499eeb03c423c41f4ca48bcc77f9c2769587a15755a5a68f"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"45c11fff3dc01fbc6677d85c230d492afa884e059d8ff69a1e041ebefcd45700","repoDigests":["localhost/my-image@sha256:f65e
9afff1555cd49741e5cbc23cc00aee5a99aeeaac8755232575a389b04d0d"],"repoTags":["localhost/my-image:functional-995015"],"size":"1640226"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry
.k8s.io/pause:latest"],"size":"246070"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262
566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-schedule
r@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-995015"],"size":"4788229"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435
c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995015 image ls --format json --alsologtostderr:
I0919 22:35:33.118821   36281 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:33.118982   36281 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:33.119010   36281 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:33.119030   36281 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:33.119306   36281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
I0919 22:35:33.119927   36281 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:33.120091   36281 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:33.120568   36281 cli_runner.go:164] Run: docker container inspect functional-995015 --format={{.State.Status}}
I0919 22:35:33.137438   36281 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:33.137513   36281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995015
I0919 22:35:33.156251   36281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/functional-995015/id_rsa Username:docker}
I0919 22:35:33.254842   36281 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
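
Note: per the stderr traces, every `image ls` variant above is backed by `sudo crictl images --output json` inside the guest. Assuming jq is installed on the host, the raw JSON output can be summarized the same way the table view does, e.g.:

	out/minikube-linux-arm64 -p functional-995015 image ls --format json | jq -r '.[] | [.id[0:13], (.repoTags | join(","))] | @tsv'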

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995015 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 0df51e09838d60db260dd561abb880838f307b736da5a0ac66eec1013ac1317f
repoDigests:
- localhost/minikube-local-cache-test@sha256:0163a756f990b308b0f8083d435c0a69f1615bb82e2f8a64108b3be73d55bf4d
repoTags:
- localhost/minikube-local-cache-test:functional-995015
size: "3330"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97
repoDigests:
- docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-995015
size: "4788229"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995015 image ls --format yaml --alsologtostderr:
I0919 22:35:28.756473   35995 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:28.756696   35995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:28.756711   35995 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:28.756715   35995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:28.756998   35995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
I0919 22:35:28.757645   35995 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:28.757811   35995 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:28.758315   35995 cli_runner.go:164] Run: docker container inspect functional-995015 --format={{.State.Status}}
I0919 22:35:28.777184   35995 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:28.777253   35995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995015
I0919 22:35:28.797283   35995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/functional-995015/id_rsa Username:docker}
I0919 22:35:28.890717   35995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh pgrep buildkitd: exit status 1 (272.76245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image build -t localhost/my-image:functional-995015 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 image build -t localhost/my-image:functional-995015 testdata/build --alsologtostderr: (3.621541276s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-995015 image build -t localhost/my-image:functional-995015 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ea0061ef2bd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-995015
--> 45c11fff3dc
Successfully tagged localhost/my-image:functional-995015
45c11fff3dc01fbc6677d85c230d492afa884e059d8ff69a1e041ebefcd45700
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-995015 image build -t localhost/my-image:functional-995015 testdata/build --alsologtostderr:
I0919 22:35:29.254859   36084 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:29.255001   36084 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:29.255012   36084 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:29.255016   36084 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:29.255255   36084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
I0919 22:35:29.255984   36084 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:29.256559   36084 config.go:182] Loaded profile config "functional-995015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:29.257017   36084 cli_runner.go:164] Run: docker container inspect functional-995015 --format={{.State.Status}}
I0919 22:35:29.275423   36084 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:29.275481   36084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995015
I0919 22:35:29.294038   36084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/functional-995015/id_rsa Username:docker}
I0919 22:35:29.390893   36084 build_images.go:161] Building image from path: /tmp/build.1946443446.tar
I0919 22:35:29.391063   36084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 22:35:29.400857   36084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1946443446.tar
I0919 22:35:29.404537   36084 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1946443446.tar: stat -c "%s %y" /var/lib/minikube/build/build.1946443446.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1946443446.tar': No such file or directory
I0919 22:35:29.404582   36084 ssh_runner.go:362] scp /tmp/build.1946443446.tar --> /var/lib/minikube/build/build.1946443446.tar (3072 bytes)
I0919 22:35:29.429111   36084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1946443446
I0919 22:35:29.437617   36084 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1946443446 -xf /var/lib/minikube/build/build.1946443446.tar
I0919 22:35:29.447179   36084 crio.go:315] Building image: /var/lib/minikube/build/build.1946443446
I0919 22:35:29.447268   36084 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-995015 /var/lib/minikube/build/build.1946443446 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0919 22:35:32.808064   36084 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-995015 /var/lib/minikube/build/build.1946443446 --cgroup-manager=cgroupfs: (3.360766389s)
I0919 22:35:32.808129   36084 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1946443446
I0919 22:35:32.817198   36084 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1946443446.tar
I0919 22:35:32.825590   36084 build_images.go:217] Built localhost/my-image:functional-995015 from /tmp/build.1946443446.tar
I0919 22:35:32.825617   36084 build_images.go:133] succeeded building to: functional-995015
I0919 22:35:32.825622   36084 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)
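
Note: the STEP lines in the build output pin down the build context, so testdata/build is equivalent to a three-instruction Dockerfile. A hypothetical local reproduction (the content.txt payload is assumed, not taken from this report):

	mkdir -p /tmp/build
	printf 'placeholder\n' > /tmp/build/content.txt   # assumed content; not in the report
	cat > /tmp/build/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-995015 image build -t localhost/my-image:functional-995015 /tmp/build

As the stderr trace shows, on a crio cluster the actual build is delegated to `sudo podman build` inside the node.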

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-995015
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image load --daemon kicbase/echo-server:functional-995015 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-995015 image load --daemon kicbase/echo-server:functional-995015 --alsologtostderr: (1.485776047s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image load --daemon kicbase/echo-server:functional-995015 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-995015
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image load --daemon kicbase/echo-server:functional-995015 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image save kicbase/echo-server:functional-995015 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image rm kicbase/echo-server:functional-995015 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-995015
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 image save --daemon kicbase/echo-server:functional-995015 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-995015
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-995015 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-995015 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-995015 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 31859: os: process already finished
helpers_test.go:525: unable to kill pid 31738: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-995015 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-995015 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-995015 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6b2bcaa6-28a5-4b5c-838f-d81fd7fe2efd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6b2bcaa6-28a5-4b5c-838f-d81fd7fe2efd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003502255s
I0919 22:25:11.533710    4161 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-995015 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.199.232 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
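
Note: AccessDirect only logs the probe result, but the pieces are all above: the tunnel from StartTunnel is still running and WaitService/IngressIP fetched the LoadBalancer IP. A manual spot-check of the same path (sketch; IP taken from this run):

	kubectl --context functional-995015 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.103.199.232/ | head -n 5   # nginx welcome page while the tunnel is up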

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-995015 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 service list -o json
functional_test.go:1504: Took "500.375539ms" to run "out/minikube-linux-arm64 -p functional-995015 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "358.643744ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.635049ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "357.483674ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.952895ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
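The `Took "..."` lines above are plain wall-clock measurements around each CLI call. A minimal sketch of the pattern (the one-second budget is illustrative, not the suite's real threshold):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Measure the lighter variant of the command, as the test does.
	if err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Run(); err != nil {
		panic(err)
	}
	elapsed := time.Since(start)
	fmt.Printf("Took %q to run the command\n", elapsed.String())
	if elapsed > time.Second {
		fmt.Println("FAIL: slower than budget")
	}
}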

TestFunctional/parallel/MountCmd/any-port (8.66s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdany-port1776610552/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758321305299469854" to /tmp/TestFunctionalparallelMountCmdany-port1776610552/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758321305299469854" to /tmp/TestFunctionalparallelMountCmdany-port1776610552/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758321305299469854" to /tmp/TestFunctionalparallelMountCmdany-port1776610552/001/test-1758321305299469854
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.057603ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 22:35:05.632770    4161 retry.go:31] will retry after 352.977935ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 22:35 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 22:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 22:35 test-1758321305299469854
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh cat /mount-9p/test-1758321305299469854
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-995015 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d78dd572-92b0-4d41-8847-6713d1c5f06d] Pending
helpers_test.go:352: "busybox-mount" [d78dd572-92b0-4d41-8847-6713d1c5f06d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d78dd572-92b0-4d41-8847-6713d1c5f06d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d78dd572-92b0-4d41-8847-6713d1c5f06d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003207988s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-995015 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdany-port1776610552/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.66s)
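The initial findmnt failure above is expected: the first probe can race the mount daemon, so the harness retries with backoff (retry.go:31). A sketch of that loop under the same assumptions (minikube on PATH, profile name from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		// minikube ssh takes the remote command, pipeline included, as one argument.
		out, err := exec.Command("minikube", "-p", "functional-995015",
			"ssh", "findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return
		}
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mount never appeared")
}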

TestFunctional/parallel/MountCmd/specific-port (1.66s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdspecific-port2463856842/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.260575ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 22:35:14.333896    4161 retry.go:31] will retry after 252.03944ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdspecific-port2463856842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh "sudo umount -f /mount-9p": exit status 1 (267.646536ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-995015 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdspecific-port2463856842/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)
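The failing umount at cleanup is tolerated: the 9p mount is already gone by then, so the remote `umount -f` exits 32 ("not mounted") and the test only logs it. A sketch of that tolerance; matching on the "not mounted" message is an assumption about how minikube ssh surfaces the remote status:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-995015",
		"ssh", "sudo umount -f /mount-9p").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && strings.Contains(string(out), "not mounted") {
		// Already unmounted: log and keep going, as the test does.
		fmt.Println("tolerated:", strings.TrimSpace(string(out)))
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("unmounted cleanly")
}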

TestFunctional/parallel/MountCmd/VerifyCleanup (2.14s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T" /mount1: exit status 1 (543.428893ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 22:35:16.176714    4161 retry.go:31] will retry after 710.334554ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-995015 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-995015 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-995015 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1063581786/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.14s)

TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-995015
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-995015
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-995015
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (200.62s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0919 22:37:40.670241    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m19.762703793s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.62s)

TestMultiControlPlane/serial/DeployApp (42.92s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- rollout status deployment/busybox
E0919 22:39:03.731855    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 kubectl -- rollout status deployment/busybox: (6.388096299s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:09.873503    4161 retry.go:31] will retry after 1.040923216s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:11.094184    4161 retry.go:31] will retry after 1.009088561s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:12.269769    4161 retry.go:31] will retry after 2.572522283s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:15.021119    4161 retry.go:31] will retry after 2.792419249s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:17.967769    4161 retry.go:31] will retry after 4.682158733s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:22.843703    4161 retry.go:31] will retry after 3.830617511s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
I0919 22:39:26.865335    4161 retry.go:31] will retry after 16.390178263s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.2 10.244.0.4 10.244.2.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-mt8n5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-v5fhx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-zzvhz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-mt8n5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-v5fhx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-zzvhz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-mt8n5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-v5fhx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-zzvhz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (42.92s)
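The repeated "expected 3 Pod IPs but got 4" lines show the harness waiting out a terminating pod from a prior rollout; the fourth IP drops once that pod is reaped. The polling loop reduces to the following sketch (the suite's retry.go backoff is randomized rather than fixed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 0; attempt < 20; attempt++ {
		out, err := exec.Command("kubectl", "--context", "ha-800727", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			panic(err)
		}
		ips := strings.Fields(string(out))
		if len(ips) == 3 {
			fmt.Println("pod IPs settled:", ips)
			return
		}
		fmt.Printf("expected 3 Pod IPs but got %d (may be temporary)\n", len(ips))
		time.Sleep(2 * time.Second)
	}
	panic("pod IPs never settled")
}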

TestMultiControlPlane/serial/PingHostFromPods (1.63s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-mt8n5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-mt8n5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-v5fhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-v5fhx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-zzvhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 kubectl -- exec busybox-7b57f96db7-zzvhz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)
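The shell pipeline above pulls the host gateway IP out of busybox nslookup output: awk 'NR==5' takes the fifth line and cut -d' ' -f3 takes its third space-separated field. The same extraction in Go, keeping cut's single-space splitting (the line and field positions are assumptions the test makes about busybox's nslookup format):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-800727", "exec",
		"busybox-7b57f96db7-mt8n5", "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		panic("unexpected nslookup output")
	}
	// awk 'NR==5' | cut -d' ' -f3: fifth line, third space-separated field.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		panic("unexpected nslookup line format")
	}
	fmt.Println("host IP:", fields[2])
}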

TestMultiControlPlane/serial/AddWorkerNode (31.28s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node add --alsologtostderr -v 5
E0919 22:39:56.898709    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:56.905012    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:56.916430    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:56.937734    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:56.979089    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:57.060892    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:57.229007    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:57.550798    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.192993    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:59.474772    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:02.036683    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:07.160337    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:17.402398    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 node add --alsologtostderr -v 5: (30.232089274s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5: (1.043157223s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.28s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-800727 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048812954s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (19.45s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 status --output json --alsologtostderr -v 5: (1.003151292s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp testdata/cp-test.txt ha-800727:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile260539725/001/cp-test_ha-800727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727:/home/docker/cp-test.txt ha-800727-m02:/home/docker/cp-test_ha-800727_ha-800727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test_ha-800727_ha-800727-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727:/home/docker/cp-test.txt ha-800727-m03:/home/docker/cp-test_ha-800727_ha-800727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test_ha-800727_ha-800727-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727:/home/docker/cp-test.txt ha-800727-m04:/home/docker/cp-test_ha-800727_ha-800727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test_ha-800727_ha-800727-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp testdata/cp-test.txt ha-800727-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile260539725/001/cp-test_ha-800727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m02:/home/docker/cp-test.txt ha-800727:/home/docker/cp-test_ha-800727-m02_ha-800727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test_ha-800727-m02_ha-800727.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m02:/home/docker/cp-test.txt ha-800727-m03:/home/docker/cp-test_ha-800727-m02_ha-800727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test_ha-800727-m02_ha-800727-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m02:/home/docker/cp-test.txt ha-800727-m04:/home/docker/cp-test_ha-800727-m02_ha-800727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test_ha-800727-m02_ha-800727-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp testdata/cp-test.txt ha-800727-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile260539725/001/cp-test_ha-800727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m03:/home/docker/cp-test.txt ha-800727:/home/docker/cp-test_ha-800727-m03_ha-800727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test_ha-800727-m03_ha-800727.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m03:/home/docker/cp-test.txt ha-800727-m02:/home/docker/cp-test_ha-800727-m03_ha-800727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test_ha-800727-m03_ha-800727-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m03:/home/docker/cp-test.txt ha-800727-m04:/home/docker/cp-test_ha-800727-m03_ha-800727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test_ha-800727-m03_ha-800727-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp testdata/cp-test.txt ha-800727-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile260539725/001/cp-test_ha-800727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m04:/home/docker/cp-test.txt ha-800727:/home/docker/cp-test_ha-800727-m04_ha-800727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727 "sudo cat /home/docker/cp-test_ha-800727-m04_ha-800727.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m04:/home/docker/cp-test.txt ha-800727-m02:/home/docker/cp-test_ha-800727-m04_ha-800727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test.txt"
E0919 22:40:37.883906    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m02 "sudo cat /home/docker/cp-test_ha-800727-m04_ha-800727-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 cp ha-800727-m04:/home/docker/cp-test.txt ha-800727-m03:/home/docker/cp-test_ha-800727-m04_ha-800727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 ssh -n ha-800727-m03 "sudo cat /home/docker/cp-test_ha-800727-m04_ha-800727-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.45s)
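Each cp/ssh pair above is one round of a copy-and-verify pattern: `minikube cp` a file onto a node, cat it back over ssh, compare bytes. One round as a sketch, with node and path names taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("minikube", append([]string{"-p", "ha-800727"}, args...)...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	// Copy onto the node, then read it back through ssh.
	run("cp", "testdata/cp-test.txt", "ha-800727-m02:/home/docker/cp-test.txt")
	got := run("ssh", "-n", "ha-800727-m02", "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match source")
	}
	fmt.Println("cp-test.txt verified on ha-800727-m02")
}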

TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 node stop m02 --alsologtostderr -v 5: (11.954666857s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5: exit status 7 (783.661735ms)

-- stdout --
	ha-800727
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-800727-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-800727-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-800727-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0919 22:40:51.493880   52682 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:40:51.494004   52682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:51.494015   52682 out.go:374] Setting ErrFile to fd 2...
	I0919 22:40:51.494021   52682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:40:51.494425   52682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:40:51.494647   52682 out.go:368] Setting JSON to false
	I0919 22:40:51.494698   52682 mustload.go:65] Loading cluster: ha-800727
	I0919 22:40:51.494779   52682 notify.go:220] Checking for updates...
	I0919 22:40:51.495718   52682 config.go:182] Loaded profile config "ha-800727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:40:51.495748   52682 status.go:174] checking status of ha-800727 ...
	I0919 22:40:51.496328   52682 cli_runner.go:164] Run: docker container inspect ha-800727 --format={{.State.Status}}
	I0919 22:40:51.518298   52682 status.go:371] ha-800727 host status = "Running" (err=<nil>)
	I0919 22:40:51.518323   52682 host.go:66] Checking if "ha-800727" exists ...
	I0919 22:40:51.518636   52682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-800727
	I0919 22:40:51.548531   52682 host.go:66] Checking if "ha-800727" exists ...
	I0919 22:40:51.548906   52682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:51.548961   52682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-800727
	I0919 22:40:51.568906   52682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/ha-800727/id_rsa Username:docker}
	I0919 22:40:51.667936   52682 ssh_runner.go:195] Run: systemctl --version
	I0919 22:40:51.673327   52682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:51.688490   52682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:40:51.760635   52682 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-19 22:40:51.749711611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:40:51.761174   52682 kubeconfig.go:125] found "ha-800727" server: "https://192.168.49.254:8443"
	I0919 22:40:51.761208   52682 api_server.go:166] Checking apiserver status ...
	I0919 22:40:51.761261   52682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:51.773497   52682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I0919 22:40:51.783545   52682 api_server.go:182] apiserver freezer: "10:freezer:/docker/3d44f1e3a5ad29e0fb91b410d7ef12267c69f757973fb8a353d7ccf801c8dfee/crio/crio-8372d44f15e21ed85949e72982bd6986b0d8349ccd00c5e1c5ab35729b1d01e0"
	I0919 22:40:51.783629   52682 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3d44f1e3a5ad29e0fb91b410d7ef12267c69f757973fb8a353d7ccf801c8dfee/crio/crio-8372d44f15e21ed85949e72982bd6986b0d8349ccd00c5e1c5ab35729b1d01e0/freezer.state
	I0919 22:40:51.792984   52682 api_server.go:204] freezer state: "THAWED"
	I0919 22:40:51.793016   52682 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:40:51.802069   52682 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:40:51.802097   52682 status.go:463] ha-800727 apiserver status = Running (err=<nil>)
	I0919 22:40:51.802108   52682 status.go:176] ha-800727 status: &{Name:ha-800727 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:40:51.802135   52682 status.go:174] checking status of ha-800727-m02 ...
	I0919 22:40:51.802517   52682 cli_runner.go:164] Run: docker container inspect ha-800727-m02 --format={{.State.Status}}
	I0919 22:40:51.825352   52682 status.go:371] ha-800727-m02 host status = "Stopped" (err=<nil>)
	I0919 22:40:51.825377   52682 status.go:384] host is not running, skipping remaining checks
	I0919 22:40:51.825384   52682 status.go:176] ha-800727-m02 status: &{Name:ha-800727-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:40:51.825405   52682 status.go:174] checking status of ha-800727-m03 ...
	I0919 22:40:51.825728   52682 cli_runner.go:164] Run: docker container inspect ha-800727-m03 --format={{.State.Status}}
	I0919 22:40:51.845860   52682 status.go:371] ha-800727-m03 host status = "Running" (err=<nil>)
	I0919 22:40:51.845887   52682 host.go:66] Checking if "ha-800727-m03" exists ...
	I0919 22:40:51.846202   52682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-800727-m03
	I0919 22:40:51.865263   52682 host.go:66] Checking if "ha-800727-m03" exists ...
	I0919 22:40:51.865574   52682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:51.865612   52682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-800727-m03
	I0919 22:40:51.884104   52682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/ha-800727-m03/id_rsa Username:docker}
	I0919 22:40:51.979499   52682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:51.991568   52682 kubeconfig.go:125] found "ha-800727" server: "https://192.168.49.254:8443"
	I0919 22:40:51.991598   52682 api_server.go:166] Checking apiserver status ...
	I0919 22:40:51.991646   52682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:40:52.005899   52682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	I0919 22:40:52.017297   52682 api_server.go:182] apiserver freezer: "10:freezer:/docker/a41e4eb5be49a1c4ca945bd992aa2f8427526e369c4125ed0f72ccd03d31df49/crio/crio-b87a5ec4ac0e056585cb746dc2a941edf529bbdf3c09dcd97e258f6f7586be08"
	I0919 22:40:52.017420   52682 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a41e4eb5be49a1c4ca945bd992aa2f8427526e369c4125ed0f72ccd03d31df49/crio/crio-b87a5ec4ac0e056585cb746dc2a941edf529bbdf3c09dcd97e258f6f7586be08/freezer.state
	I0919 22:40:52.027130   52682 api_server.go:204] freezer state: "THAWED"
	I0919 22:40:52.027171   52682 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:40:52.035357   52682 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:40:52.035390   52682 status.go:463] ha-800727-m03 apiserver status = Running (err=<nil>)
	I0919 22:40:52.035401   52682 status.go:176] ha-800727-m03 status: &{Name:ha-800727-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:40:52.035419   52682 status.go:174] checking status of ha-800727-m04 ...
	I0919 22:40:52.035730   52682 cli_runner.go:164] Run: docker container inspect ha-800727-m04 --format={{.State.Status}}
	I0919 22:40:52.054457   52682 status.go:371] ha-800727-m04 host status = "Running" (err=<nil>)
	I0919 22:40:52.054506   52682 host.go:66] Checking if "ha-800727-m04" exists ...
	I0919 22:40:52.054802   52682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-800727-m04
	I0919 22:40:52.073308   52682 host.go:66] Checking if "ha-800727-m04" exists ...
	I0919 22:40:52.073621   52682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:40:52.073671   52682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-800727-m04
	I0919 22:40:52.094750   52682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/ha-800727-m04/id_rsa Username:docker}
	I0919 22:40:52.195604   52682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:40:52.207766   52682 status.go:176] ha-800727-m04 status: &{Name:ha-800727-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
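The stderr trace shows how status concludes `apiserver: Running`: find the kube-apiserver PID, check its cgroup freezer state is "THAWED", then GET /healthz on the control-plane endpoint. The final step alone, as a self-contained sketch (TLS verification is skipped here purely for brevity; real minikube validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	// Control-plane VIP and port taken from the log above.
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// 200 with body "ok" is what the status check treats as healthy.
	fmt.Println("healthz returned", resp.StatusCode)
}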

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.04s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node start m02 --alsologtostderr -v 5
E0919 22:41:18.845758    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 node start m02 --alsologtostderr -v 5: (31.736890956s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5: (1.177323942s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.2s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.202995746s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.55s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 stop --alsologtostderr -v 5: (26.827417976s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 start --wait true --alsologtostderr -v 5
E0919 22:42:40.667465    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:42:40.767223    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 start --wait true --alsologtostderr -v 5: (1m34.55825315s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (121.55s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.21s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 node delete m03 --alsologtostderr -v 5: (11.272999566s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.21s)
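The go-template above prints each node's Ready condition. The same check via `-o json` and a pared-down struct (field names as in the Kubernetes API):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}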

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (35.69s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 stop --alsologtostderr -v 5: (35.570453265s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5: exit status 7 (116.932824ms)

-- stdout --
	ha-800727
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-800727-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-800727-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 22:44:17.387437   66512 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:44:17.387630   66512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:44:17.387659   66512 out.go:374] Setting ErrFile to fd 2...
	I0919 22:44:17.387678   66512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:44:17.387968   66512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:44:17.388198   66512 out.go:368] Setting JSON to false
	I0919 22:44:17.388252   66512 mustload.go:65] Loading cluster: ha-800727
	I0919 22:44:17.388282   66512 notify.go:220] Checking for updates...
	I0919 22:44:17.388711   66512 config.go:182] Loaded profile config "ha-800727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:44:17.388754   66512 status.go:174] checking status of ha-800727 ...
	I0919 22:44:17.389293   66512 cli_runner.go:164] Run: docker container inspect ha-800727 --format={{.State.Status}}
	I0919 22:44:17.409143   66512 status.go:371] ha-800727 host status = "Stopped" (err=<nil>)
	I0919 22:44:17.409165   66512 status.go:384] host is not running, skipping remaining checks
	I0919 22:44:17.409172   66512 status.go:176] ha-800727 status: &{Name:ha-800727 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:44:17.409199   66512 status.go:174] checking status of ha-800727-m02 ...
	I0919 22:44:17.409507   66512 cli_runner.go:164] Run: docker container inspect ha-800727-m02 --format={{.State.Status}}
	I0919 22:44:17.435642   66512 status.go:371] ha-800727-m02 host status = "Stopped" (err=<nil>)
	I0919 22:44:17.435661   66512 status.go:384] host is not running, skipping remaining checks
	I0919 22:44:17.435668   66512 status.go:176] ha-800727-m02 status: &{Name:ha-800727-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:44:17.435686   66512 status.go:174] checking status of ha-800727-m04 ...
	I0919 22:44:17.435980   66512 cli_runner.go:164] Run: docker container inspect ha-800727-m04 --format={{.State.Status}}
	I0919 22:44:17.453696   66512 status.go:371] ha-800727-m04 host status = "Stopped" (err=<nil>)
	I0919 22:44:17.453717   66512 status.go:384] host is not running, skipping remaining checks
	I0919 22:44:17.453734   66512 status.go:176] ha-800727-m04 status: &{Name:ha-800727-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.69s)
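The `exit status 7` is the expected result here: minikube status encodes host, cluster, and Kubernetes health on separate exit-code bits (1, 2, and 4 per its help text), so 7 means all three are down. Decoding it, as a sketch; the bit labels are read from the help output and should be treated as an assumption:

package main

import "fmt"

func main() {
	const exitStatus = 7
	flags := []struct {
		bit  int
		what string
	}{
		{1, "minikube host not running"},
		{2, "cluster not running"},
		{4, "kubernetes not running"},
	}
	for _, f := range flags {
		if exitStatus&f.bit != 0 {
			fmt.Println(f.what)
		}
	}
}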

TestMultiControlPlane/serial/RestartCluster (77.68s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0919 22:44:56.899343    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:45:24.608483    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m16.724961225s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.68s)
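The go-template passed to kubectl above is easier to read when evaluated standalone. Below is a minimal Go sketch (the sample JSON document is ours, not captured from this run) that runs the exact same template over a generic unmarshalled map, which is how kubectl resolves lowercase keys like .items:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Hypothetical stand-in for the `kubectl get nodes -o json` document
	// that the template walks; the field values here are ours.
	raw := []byte(`{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`)
	var doc interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	// The exact template from the test: for every node, print the status
	// of its "Ready" condition on its own line.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	if err := tmpl.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
	// Output: " True"
}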

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 node add --control-plane --alsologtostderr -v 5: (1m20.508755679s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-800727 status --alsologtostderr -v 5: (1.040445616s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.025779546s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (82.56s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-343037 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0919 22:47:40.668276    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-343037 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.559759541s)
--- PASS: TestJSONOutput/start/Command (82.56s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-343037 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-343037 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-343037 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-343037 --output=json --user=testUser: (5.844022961s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-560975 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-560975 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.195432ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0ca9ad5a-655f-4ebe-877c-206723e6df1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-560975] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fa61358-8afb-4dd0-87bf-ecc0182d7223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"148bbbb6-e2d0-400d-9127-b838253290b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa7fbc5f-ca9e-4fe7-9bfd-2981fa50367b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig"}}
	{"specversion":"1.0","id":"5b4c15c9-1bb9-4b5a-9cba-d418a3166ca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube"}}
	{"specversion":"1.0","id":"b292aaeb-440f-45dc-90a7-1a3159b45475","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"555c5e64-cbf2-4959-bb77-7c0017451f7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a6b8943d-d19f-468c-9836-899b470863e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-560975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-560975
--- PASS: TestErrorJSONOutput (0.23s)
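Each line that --output=json emits is a CloudEvents-style JSON object, as the stdout above shows. A minimal sketch of consuming such a stream; the cloudEvent type mirrors the fields visible in the output and is a reading aid, not minikube's own type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent mirrors the fields visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One event line copied from the stdout above.
	stream := `{"specversion":"1.0","id":"a6b8943d-d19f-468c-9836-899b470863e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not an event line
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}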

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.66s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-047405 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-047405 --network=: (42.58203217s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-047405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-047405
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-047405: (2.058352103s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.66s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.84s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-426039 --network=bridge
E0919 22:49:56.899192    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-426039 --network=bridge: (33.765868747s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-426039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-426039
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-426039: (2.048399318s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.84s)

                                                
                                    
TestKicExistingNetwork (36.06s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0919 22:50:03.106835    4161 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 22:50:03.122349    4161 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 22:50:03.122432    4161 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 22:50:03.122449    4161 cli_runner.go:164] Run: docker network inspect existing-network
W0919 22:50:03.138418    4161 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 22:50:03.138449    4161 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0919 22:50:03.138465    4161 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0919 22:50:03.138560    4161 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 22:50:03.156875    4161 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c8bbc8f27afe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:42:b1:c5:de:c1} reservation:<nil>}
I0919 22:50:03.157225    4161 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000a78e60}
I0919 22:50:03.157258    4161 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 22:50:03.157311    4161 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 22:50:03.219487    4161 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-513009 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-513009 --network=existing-network: (33.963141598s)
helpers_test.go:175: Cleaning up "existing-network-513009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-513009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-513009: (1.93999684s)
I0919 22:50:39.139346    4161 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.06s)
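The cli_runner lines above trace the whole bring-up: the inspect fails because the network does not yet exist, the taken 192.168.49.0/24 subnet is skipped, and a bridge network is created on the next free /24. A minimal sketch of that final create step, with the flags copied from the logged command (requires a local Docker daemon):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied from the cli_runner line above; the name and subnet are
	// the ones this run chose after skipping 192.168.49.0/24.
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("docker network create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network %s", out) // docker prints the new network ID
}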

                                                
                                    
TestKicCustomSubnet (37.42s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-131341 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-131341 --subnet=192.168.60.0/24: (35.235212864s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-131341 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-131341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-131341
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-131341: (2.152639404s)
--- PASS: TestKicCustomSubnet (37.42s)

                                                
                                    
TestKicStaticIP (33.66s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-983901 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-983901 --static-ip=192.168.200.200: (31.396073097s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-983901 ip
helpers_test.go:175: Cleaning up "static-ip-983901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-983901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-983901: (2.103758912s)
--- PASS: TestKicStaticIP (33.66s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.71s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-442165 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-442165 --driver=docker  --container-runtime=crio: (30.912769282s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-444848 --driver=docker  --container-runtime=crio
E0919 22:52:40.669887    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-444848 --driver=docker  --container-runtime=crio: (32.509181537s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-442165
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-444848
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-444848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-444848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-444848: (2.010270161s)
helpers_test.go:175: Cleaning up "first-442165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-442165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-442165: (1.920473797s)
--- PASS: TestMinikubeProfile (68.71s)
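`profile list -ojson` is run after each `profile` switch above to confirm the active profile changed. A minimal sketch of reading that output; the valid/Name field names are assumptions about the JSON the command prints, not values verified against this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Assumed schema: a top-level "valid" array of profile objects, each
	// with a "Name" field; adjust if the actual output differs.
	var doc struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &doc); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, p := range doc.Valid {
		fmt.Println("profile:", p.Name)
	}
}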

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-189797 --memory=3072 --mount-string /tmp/TestMountStartserial2384830634/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-189797 --memory=3072 --mount-string /tmp/TestMountStartserial2384830634/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.214969004s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.22s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-189797 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-192078 --memory=3072 --mount-string /tmp/TestMountStartserial2384830634/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-192078 --memory=3072 --mount-string /tmp/TestMountStartserial2384830634/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.833263552s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-192078 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-189797 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-189797 --alsologtostderr -v=5: (1.622805258s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-192078 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-192078
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-192078: (1.205309099s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.23s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-192078
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-192078: (7.233092773s)
--- PASS: TestMountStart/serial/RestartStopped (8.23s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-192078 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (133.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-511657 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 22:54:56.899206    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-511657 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.311766101s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.83s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- rollout status deployment/busybox
E0919 22:55:43.733940    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-511657 -- rollout status deployment/busybox: (4.35925921s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-g7t8g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-vnndc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-g7t8g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-vnndc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-g7t8g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-vnndc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.23s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-g7t8g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-g7t8g -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-vnndc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-511657 -- exec busybox-7b57f96db7-vnndc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
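The two exec pipelines above first resolve host.minikube.internal inside each pod (the awk/cut pair plucks the address out of nslookup's output) and then ping that gateway IP from the pod. A minimal sketch of the same round trip; it goes through plain kubectl rather than the test's minikube kubectl wrapper, and the pod name is taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-g7t8g" // pod name from the log above
	// Resolve the host IP inside the pod, exactly as the test does.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out)) // 192.168.67.1 in this run
	// One ping from inside the pod back to the host gateway.
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("pod can reach host at", ip)
}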

                                                
                                    
TestMultiNode/serial/AddNode (58.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-511657 -v=5 --alsologtostderr
E0919 22:56:19.970426    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-511657 -v=5 --alsologtostderr: (57.327650936s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.04s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-511657 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp testdata/cp-test.txt multinode-511657:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2452208031/001/cp-test_multinode-511657.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657:/home/docker/cp-test.txt multinode-511657-m02:/home/docker/cp-test_multinode-511657_multinode-511657-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m02 "sudo cat /home/docker/cp-test_multinode-511657_multinode-511657-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657:/home/docker/cp-test.txt multinode-511657-m03:/home/docker/cp-test_multinode-511657_multinode-511657-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m03 "sudo cat /home/docker/cp-test_multinode-511657_multinode-511657-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp testdata/cp-test.txt multinode-511657-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2452208031/001/cp-test_multinode-511657-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657-m02:/home/docker/cp-test.txt multinode-511657:/home/docker/cp-test_multinode-511657-m02_multinode-511657.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657 "sudo cat /home/docker/cp-test_multinode-511657-m02_multinode-511657.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657-m02:/home/docker/cp-test.txt multinode-511657-m03:/home/docker/cp-test_multinode-511657-m02_multinode-511657-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m03 "sudo cat /home/docker/cp-test_multinode-511657-m02_multinode-511657-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp testdata/cp-test.txt multinode-511657-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2452208031/001/cp-test_multinode-511657-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657-m03:/home/docker/cp-test.txt multinode-511657:/home/docker/cp-test_multinode-511657-m03_multinode-511657.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657 "sudo cat /home/docker/cp-test_multinode-511657-m03_multinode-511657.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 cp multinode-511657-m03:/home/docker/cp-test.txt multinode-511657-m02:/home/docker/cp-test_multinode-511657-m03_multinode-511657-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 ssh -n multinode-511657-m02 "sudo cat /home/docker/cp-test_multinode-511657-m03_multinode-511657-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.12s)
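The long cp/ssh sequence above is one pattern applied to every node pair: cp a file into a node, then ssh back in and cat it to confirm the transfer. A minimal sketch of that loop over the three nodes in this run; the run helper is ours:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary under test; the helper is ours.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	nodes := []string{"multinode-511657", "multinode-511657-m02", "multinode-511657-m03"}
	for _, n := range nodes {
		// Copy the fixture into the node, then ssh back in and cat it.
		if _, err := run("-p", "multinode-511657", "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt"); err != nil {
			fmt.Println(n, "cp failed:", err)
			continue
		}
		out, err := run("-p", "multinode-511657", "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("%s: %q err=%v\n", n, out, err)
	}
}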

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-511657 node stop m03: (1.307166512s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-511657 status: exit status 7 (549.857095ms)

                                                
                                                
-- stdout --
	multinode-511657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-511657-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-511657-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr: exit status 7 (539.262163ms)

                                                
                                                
-- stdout --
	multinode-511657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-511657-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-511657-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:57:00.897617  119839 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:57:00.898085  119839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:57:00.898121  119839 out.go:374] Setting ErrFile to fd 2...
	I0919 22:57:00.898140  119839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:57:00.898457  119839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:57:00.898683  119839 out.go:368] Setting JSON to false
	I0919 22:57:00.898757  119839 mustload.go:65] Loading cluster: multinode-511657
	I0919 22:57:00.899177  119839 config.go:182] Loaded profile config "multinode-511657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:57:00.899224  119839 status.go:174] checking status of multinode-511657 ...
	I0919 22:57:00.899726  119839 cli_runner.go:164] Run: docker container inspect multinode-511657 --format={{.State.Status}}
	I0919 22:57:00.899793  119839 notify.go:220] Checking for updates...
	I0919 22:57:00.923764  119839 status.go:371] multinode-511657 host status = "Running" (err=<nil>)
	I0919 22:57:00.923790  119839 host.go:66] Checking if "multinode-511657" exists ...
	I0919 22:57:00.924114  119839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-511657
	I0919 22:57:00.948180  119839 host.go:66] Checking if "multinode-511657" exists ...
	I0919 22:57:00.948623  119839 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:57:00.948688  119839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-511657
	I0919 22:57:00.968667  119839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/multinode-511657/id_rsa Username:docker}
	I0919 22:57:01.068493  119839 ssh_runner.go:195] Run: systemctl --version
	I0919 22:57:01.072953  119839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:57:01.084322  119839 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:57:01.153570  119839 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-19 22:57:01.143137743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 22:57:01.154106  119839 kubeconfig.go:125] found "multinode-511657" server: "https://192.168.67.2:8443"
	I0919 22:57:01.154152  119839 api_server.go:166] Checking apiserver status ...
	I0919 22:57:01.154207  119839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:57:01.166174  119839 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	I0919 22:57:01.176554  119839 api_server.go:182] apiserver freezer: "10:freezer:/docker/ce00463125ae62ca6e4187fde7d4b40c5a6d124b0e1ed602edb2863d7cb9216e/crio/crio-7a030302c0a277c47296742040b36c9eaa9d546401cc9aa8b8908276f7247ea4"
	I0919 22:57:01.176638  119839 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ce00463125ae62ca6e4187fde7d4b40c5a6d124b0e1ed602edb2863d7cb9216e/crio/crio-7a030302c0a277c47296742040b36c9eaa9d546401cc9aa8b8908276f7247ea4/freezer.state
	I0919 22:57:01.186936  119839 api_server.go:204] freezer state: "THAWED"
	I0919 22:57:01.186966  119839 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 22:57:01.195887  119839 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 22:57:01.195919  119839 status.go:463] multinode-511657 apiserver status = Running (err=<nil>)
	I0919 22:57:01.195931  119839 status.go:176] multinode-511657 status: &{Name:multinode-511657 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:57:01.195978  119839 status.go:174] checking status of multinode-511657-m02 ...
	I0919 22:57:01.196307  119839 cli_runner.go:164] Run: docker container inspect multinode-511657-m02 --format={{.State.Status}}
	I0919 22:57:01.214230  119839 status.go:371] multinode-511657-m02 host status = "Running" (err=<nil>)
	I0919 22:57:01.214253  119839 host.go:66] Checking if "multinode-511657-m02" exists ...
	I0919 22:57:01.214639  119839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-511657-m02
	I0919 22:57:01.233117  119839 host.go:66] Checking if "multinode-511657-m02" exists ...
	I0919 22:57:01.233427  119839 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:57:01.233464  119839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-511657-m02
	I0919 22:57:01.252858  119839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21594-2355/.minikube/machines/multinode-511657-m02/id_rsa Username:docker}
	I0919 22:57:01.352187  119839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:57:01.364765  119839 status.go:176] multinode-511657-m02 status: &{Name:multinode-511657-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:57:01.364801  119839 status.go:174] checking status of multinode-511657-m03 ...
	I0919 22:57:01.365138  119839 cli_runner.go:164] Run: docker container inspect multinode-511657-m03 --format={{.State.Status}}
	I0919 22:57:01.383386  119839 status.go:371] multinode-511657-m03 host status = "Stopped" (err=<nil>)
	I0919 22:57:01.383414  119839 status.go:384] host is not running, skipping remaining checks
	I0919 22:57:01.383421  119839 status.go:176] multinode-511657-m03 status: &{Name:multinode-511657-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
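The stderr above shows how status verifies a control-plane node: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz and expect "ok". A minimal sketch of that final health probe only, with the endpoint taken from the log; certificate verification is skipped here for brevity, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Self-signed apiserver cert, so skip verification in this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}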

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-511657 node start m03 -v=5 --alsologtostderr: (7.62701438s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-511657
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-511657
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-511657: (24.798500588s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-511657 --wait=true -v=5 --alsologtostderr
E0919 22:57:40.667305    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-511657 --wait=true -v=5 --alsologtostderr: (56.246981463s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-511657
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.16s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-511657 node delete m03: (4.873331894s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-511657 stop: (23.664234783s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-511657 status: exit status 7 (176.697375ms)

                                                
                                                
-- stdout --
	multinode-511657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-511657-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr: exit status 7 (110.468614ms)

                                                
                                                
-- stdout --
	multinode-511657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-511657-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:59:00.388673  127759 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:59:00.388867  127759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:59:00.388877  127759 out.go:374] Setting ErrFile to fd 2...
	I0919 22:59:00.388883  127759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:59:00.389158  127759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 22:59:00.389374  127759 out.go:368] Setting JSON to false
	I0919 22:59:00.389401  127759 mustload.go:65] Loading cluster: multinode-511657
	I0919 22:59:00.389465  127759 notify.go:220] Checking for updates...
	I0919 22:59:00.389851  127759 config.go:182] Loaded profile config "multinode-511657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:59:00.389878  127759 status.go:174] checking status of multinode-511657 ...
	I0919 22:59:00.390440  127759 cli_runner.go:164] Run: docker container inspect multinode-511657 --format={{.State.Status}}
	I0919 22:59:00.412453  127759 status.go:371] multinode-511657 host status = "Stopped" (err=<nil>)
	I0919 22:59:00.412478  127759 status.go:384] host is not running, skipping remaining checks
	I0919 22:59:00.412485  127759 status.go:176] multinode-511657 status: &{Name:multinode-511657 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:59:00.412522  127759 status.go:174] checking status of multinode-511657-m02 ...
	I0919 22:59:00.412894  127759 cli_runner.go:164] Run: docker container inspect multinode-511657-m02 --format={{.State.Status}}
	I0919 22:59:00.443822  127759 status.go:371] multinode-511657-m02 host status = "Stopped" (err=<nil>)
	I0919 22:59:00.443880  127759 status.go:384] host is not running, skipping remaining checks
	I0919 22:59:00.443888  127759 status.go:176] multinode-511657-m02 status: &{Name:multinode-511657-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-511657 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-511657 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.40416557s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-511657 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.06s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-511657
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-511657-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-511657-m02 --driver=docker  --container-runtime=crio: exit status 14 (90.813787ms)

                                                
                                                
-- stdout --
	* [multinode-511657-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-511657-m02' is duplicated with machine name 'multinode-511657-m02' in profile 'multinode-511657'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-511657-m03 --driver=docker  --container-runtime=crio
E0919 22:59:56.898626    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-511657-m03 --driver=docker  --container-runtime=crio: (35.329487045s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-511657
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-511657: exit status 80 (343.284747ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-511657 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-511657-m03 already exists in multinode-511657-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-511657-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-511657-m03: (1.974584957s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.79s)
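
Note on the MK_USAGE failure above: minikube rejects a profile name that duplicates a machine name inside an existing profile. A minimal sketch of that rule (assumed logic, not minikube's actual validation code; the conflicts helper and sample data are invented):

package main

import "fmt"

// conflicts reports whether a proposed profile name collides with any
// machine name already owned by an existing profile.
func conflicts(candidate string, machinesByProfile map[string][]string) bool {
	for _, machines := range machinesByProfile {
		for _, m := range machines {
			if m == candidate {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-511657": {"multinode-511657", "multinode-511657-m02"},
	}
	fmt.Println(conflicts("multinode-511657-m02", existing)) // true  -> refused with MK_USAGE
	fmt.Println(conflicts("multinode-511657-m03", existing)) // false -> allowed as a new profile
}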

                                                
                                    
TestPreload (127.24s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-939093 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-939093 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (59.551765514s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-939093 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-939093 image pull gcr.io/k8s-minikube/busybox: (3.784728907s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-939093
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-939093: (5.825571038s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-939093 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-939093 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.496925174s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-939093 image list
helpers_test.go:175: Cleaning up "test-preload-939093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-939093
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-939093: (2.343195061s)
--- PASS: TestPreload (127.24s)

                                                
                                    
TestScheduledStopUnix (108.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-395309 --memory=3072 --driver=docker  --container-runtime=crio
E0919 23:02:40.667731    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-395309 --memory=3072 --driver=docker  --container-runtime=crio: (31.582916174s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-395309 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-395309 -n scheduled-stop-395309
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-395309 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 23:03:10.683162    4161 retry.go:31] will retry after 81.57µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.683298    4161 retry.go:31] will retry after 198.198µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.683597    4161 retry.go:31] will retry after 147.176µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.683846    4161 retry.go:31] will retry after 202.336µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.684432    4161 retry.go:31] will retry after 263.868µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.685563    4161 retry.go:31] will retry after 915.616µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.686647    4161 retry.go:31] will retry after 746.561µs: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.687728    4161 retry.go:31] will retry after 1.951593ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.689927    4161 retry.go:31] will retry after 3.223205ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.694102    4161 retry.go:31] will retry after 3.07121ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.697692    4161 retry.go:31] will retry after 4.465371ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.702939    4161 retry.go:31] will retry after 9.395674ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.713123    4161 retry.go:31] will retry after 14.472722ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.728301    4161 retry.go:31] will retry after 11.911711ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.740538    4161 retry.go:31] will retry after 36.746938ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
I0919 23:03:10.777804    4161 retry.go:31] will retry after 57.125829ms: open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/scheduled-stop-395309/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-395309 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-395309 -n scheduled-stop-395309
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-395309
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-395309 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-395309
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-395309: exit status 7 (66.91698ms)

                                                
                                                
-- stdout --
	scheduled-stop-395309
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-395309 -n scheduled-stop-395309
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-395309 -n scheduled-stop-395309: exit status 7 (71.357367ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-395309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-395309
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-395309: (4.918617946s)
--- PASS: TestScheduledStopUnix (108.15s)
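
Note on the retry lines above: the test polls for the scheduled-stop pid file with small, roughly growing delays. A minimal Go sketch of such a poll loop (assumed behavior, not minikube's retry package; waitForPidFile and the path are invented, and the real delays include jitter rather than plain doubling):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls until path exists, doubling the delay each miss,
// up to maxWait -- the shape of the "will retry after ..." lines above.
func waitForPidFile(path string, maxWait time.Duration) error {
	backoff := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		fmt.Printf("will retry after %v\n", backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	_ = waitForPidFile("/tmp/scheduled-stop.pid", time.Second) // invented path
}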

                                                
                                    
TestInsufficientStorage (10.5s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-663183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-663183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.034673278s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"57347431-2af4-4286-ab1c-df9c50e05a34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-663183] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d32ed06-d8fa-47b4-893d-11536ca89f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"e532246e-cdad-41ee-9d5d-6ae9071ba173","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74a4f0dd-ff7e-4642-9783-bb10adc4cf94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig"}}
	{"specversion":"1.0","id":"381ba21a-7e2f-46c9-8c45-b4c91bcae20f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube"}}
	{"specversion":"1.0","id":"422afa48-3ed9-497c-a256-c270a8d04749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ada2e783-5faf-4e99-ac3e-e1017cc5fb90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c3e15821-874b-4f58-aca9-b912b56fa149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4501138f-4d0e-4638-a088-c79acb83b15c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0f8db772-dde5-4af9-bc2b-2cbddbba8cab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"789718b2-f31d-4c03-a077-dd35e9003790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e625f9b0-6eea-46e7-a5fc-3bb0e58a757e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-663183\" primary control-plane node in \"insufficient-storage-663183\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"92c28f59-f42c-4ae2-8cc6-c5b7f9cad1d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3dc5154-3798-4652-90b4-7925e0a172ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"25b81498-f237-40e1-b55a-77fc425bcb4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-663183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-663183 --output=json --layout=cluster: exit status 7 (288.92699ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-663183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-663183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:04:35.034912  145109 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-663183" does not appear in /home/jenkins/minikube-integration/21594-2355/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-663183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-663183 --output=json --layout=cluster: exit status 7 (285.226321ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-663183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-663183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:04:35.319093  145171 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-663183" does not appear in /home/jenkins/minikube-integration/21594-2355/kubeconfig
	E0919 23:04:35.329766  145171 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/insufficient-storage-663183/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-663183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-663183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-663183: (1.89171477s)
--- PASS: TestInsufficientStorage (10.50s)
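
Note on the stdout above: with --output=json, each line minikube start emits is a CloudEvents envelope, and the RSRC_DOCKER_STORAGE failure arrives as an io.k8s.sigs.minikube.error event. A minimal Go sketch (not the test's parser; the cloudEvent struct is an assumption covering only the fields visible above) that picks error events out of such a stream on stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models just the envelope fields shown in the log; the data
// payload in these events carries only string values.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// For the run above this yields exit code 26 / RSRC_DOCKER_STORAGE.
			fmt.Printf("error %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}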

                                                
                                    
TestRunningBinaryUpgrade (53.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3036587161 start -p running-upgrade-936746 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3036587161 start -p running-upgrade-936746 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.894658954s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-936746 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-936746 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.430610548s)
helpers_test.go:175: Cleaning up "running-upgrade-936746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-936746
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-936746: (2.048077673s)
--- PASS: TestRunningBinaryUpgrade (53.04s)

                                                
                                    
TestKubernetesUpgrade (360.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.36970023s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-955082
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-955082: (1.282089003s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-955082 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-955082 status --format={{.Host}}: exit status 7 (94.302016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.253438974s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-955082 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (124.261117ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-955082] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-955082
	    minikube start -p kubernetes-upgrade-955082 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9550822 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-955082 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-955082 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.650883265s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-955082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-955082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-955082: (3.214538453s)
--- PASS: TestKubernetesUpgrade (360.12s)
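
Note on the K8S_DOWNGRADE_UNSUPPORTED exit above: the requested v1.28.0 is older than the cluster's v1.34.0, so the start is refused with exit status 106. A minimal Go sketch of a version gate (assumed logic, not minikube's actual check; parseMinor is invented and compares only the minor component of same-major versions):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMinor extracts the minor number from a "vMAJOR.MINOR.PATCH" string,
// returning -1 on malformed input.
func parseMinor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return -1
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return -1
	}
	return minor
}

func main() {
	current, requested := "v1.34.0", "v1.28.0"
	if parseMinor(requested) < parseMinor(current) {
		fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
	}
}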

                                                
                                    
TestMissingContainerUpgrade (117.53s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3899367033 start -p missing-upgrade-028289 --memory=3072 --driver=docker  --container-runtime=crio
E0919 23:04:56.898986    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3899367033 start -p missing-upgrade-028289 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.701528496s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-028289
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-028289
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-028289 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-028289 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.871446879s)
helpers_test.go:175: Cleaning up "missing-upgrade-028289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-028289
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-028289: (2.356328011s)
--- PASS: TestMissingContainerUpgrade (117.53s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-442555 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-442555 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (98.572078ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-442555] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-442555 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-442555 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.485062023s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-442555 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.14s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-442555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-442555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.175581251s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-442555 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-442555 status -o json: exit status 2 (300.194473ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-442555","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-442555
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-442555: (1.972918803s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.45s)

                                                
                                    
TestNoKubernetes/serial/Start (5.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-442555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-442555 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.650328902s)
--- PASS: TestNoKubernetes/serial/Start (5.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-442555 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-442555 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.69072ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
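
Note on the expected non-zero exit above: systemctl is-active --quiet exits non-zero when the unit is not active (the "Process exited with status 3" in stderr corresponds to "inactive"), so the ssh probe failing is exactly what this test wants. A minimal Go sketch re-running the same probe (an assumed helper, not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs over `minikube ssh`.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-442555",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// Non-zero exit means the unit is not active -- the expected outcome here.
		fmt.Printf("kubelet not active (exit %d), as expected\n", exitErr.ExitCode())
	case err == nil:
		fmt.Println("kubelet is active -- the no-kubernetes check would fail")
	default:
		fmt.Println("could not run probe:", err)
	}
}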

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.66s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-442555
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-442555: (1.21498112s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-442555 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-442555 --driver=docker  --container-runtime=crio: (8.137003781s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-442555 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-442555 "sudo systemctl is-active --quiet service kubelet": exit status 1 (471.772606ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.98s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (61.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.776889570 start -p stopped-upgrade-866054 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.776889570 start -p stopped-upgrade-866054 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.00150609s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.776889570 -p stopped-upgrade-866054 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.776889570 -p stopped-upgrade-866054 stop: (1.216491447s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-866054 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-866054 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.734468867s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (61.95s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-866054
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-866054: (1.179178034s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestPause/serial/Start (79.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-555234 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-555234 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.318264528s)
--- PASS: TestPause/serial/Start (79.32s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-555234 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0919 23:09:56.899120    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-555234 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.747700863s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.77s)

                                                
                                    
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-555234 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
TestPause/serial/VerifyStatus (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-555234 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-555234 --output=json --layout=cluster: exit status 2 (353.107776ms)

                                                
                                                
-- stdout --
	{"Name":"pause-555234","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-555234","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
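
Note on the cluster layout JSON above: the report uses HTTP-like status codes throughout (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage). A minimal Go sketch that decodes a trimmed version of that payload (the clusterStatus types are assumptions covering only the fields shown, not minikube's own structs):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the subset of `minikube status --layout=cluster`
// output visible in the log above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the stdout above; 418 is the "Paused" code.
	raw := `{"Name":"pause-555234","StatusCode":418,"StatusName":"Paused",
	         "Nodes":[{"Name":"pause-555234","StatusCode":200,"StatusName":"OK"}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}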

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-555234 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-555234 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (2.66s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-555234 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-555234 --alsologtostderr -v=5: (2.655533077s)
--- PASS: TestPause/serial/DeletePaused (2.66s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-555234
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-555234: exit status 1 (20.475841ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-555234: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                    
TestNetworkPlugins/group/false (4.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-853693 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-853693 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (193.21549ms)

                                                
                                                
-- stdout --
	* [false-853693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:11:07.023554  182459 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:11:07.023719  182459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:11:07.023730  182459 out.go:374] Setting ErrFile to fd 2...
	I0919 23:11:07.023735  182459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:11:07.024001  182459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-2355/.minikube/bin
	I0919 23:11:07.024403  182459 out.go:368] Setting JSON to false
	I0919 23:11:07.025221  182459 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6818,"bootTime":1758316649,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 23:11:07.025315  182459 start.go:140] virtualization:  
	I0919 23:11:07.029266  182459 out.go:179] * [false-853693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0919 23:11:07.033173  182459 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:11:07.033280  182459 notify.go:220] Checking for updates...
	I0919 23:11:07.039065  182459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:11:07.042004  182459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-2355/kubeconfig
	I0919 23:11:07.044886  182459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-2355/.minikube
	I0919 23:11:07.047759  182459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 23:11:07.050669  182459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:11:07.054686  182459 config.go:182] Loaded profile config "kubernetes-upgrade-955082": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:11:07.054850  182459 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:11:07.082827  182459 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0919 23:11:07.082951  182459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:11:07.145508  182459 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-19 23:11:07.135967225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0919 23:11:07.145617  182459 docker.go:318] overlay module found
	I0919 23:11:07.149244  182459 out.go:179] * Using the docker driver based on user configuration
	I0919 23:11:07.152069  182459 start.go:304] selected driver: docker
	I0919 23:11:07.152091  182459 start.go:918] validating driver "docker" against <nil>
	I0919 23:11:07.152106  182459 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:11:07.155669  182459 out.go:203] 
	W0919 23:11:07.158604  182459 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0919 23:11:07.161634  182459 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-853693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-853693

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-853693

>>> host: /etc/nsswitch.conf:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/hosts:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/resolv.conf:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-853693

>>> host: crictl pods:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: crictl containers:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> k8s: describe netcat deployment:
error: context "false-853693" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-853693" does not exist

>>> k8s: netcat logs:
error: context "false-853693" does not exist

>>> k8s: describe coredns deployment:
error: context "false-853693" does not exist

>>> k8s: describe coredns pods:
error: context "false-853693" does not exist

>>> k8s: coredns logs:
error: context "false-853693" does not exist

>>> k8s: describe api server pod(s):
error: context "false-853693" does not exist

>>> k8s: api server logs:
error: context "false-853693" does not exist

>>> host: /etc/cni:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: ip a s:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: ip r s:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: iptables-save:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: iptables table nat:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> k8s: describe kube-proxy daemon set:
error: context "false-853693" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-853693" does not exist

>>> k8s: kube-proxy logs:
error: context "false-853693" does not exist

>>> host: kubelet daemon status:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: kubelet daemon config:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> k8s: kubelet logs:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-955082
contexts:
- context:
    cluster: kubernetes-upgrade-955082
    user: kubernetes-upgrade-955082
  name: kubernetes-upgrade-955082
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-955082
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/kubernetes-upgrade-955082/client.crt
    client-key: /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/kubernetes-upgrade-955082/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-853693

>>> host: docker daemon status:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: docker daemon config:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/docker/daemon.json:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: docker system info:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: cri-docker daemon status:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: cri-docker daemon config:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: cri-dockerd version:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: containerd daemon status:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: containerd daemon config:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/containerd/config.toml:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: containerd config dump:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: crio daemon status:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: crio daemon config:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: /etc/crio:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

>>> host: crio config:
* Profile "false-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-853693"

----------------------- debugLogs end: false-853693 [took: 3.667270337s] --------------------------------
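(Editor's note: every kubectl probe in the dump above fails for the same reason. The shared kubeconfig contains only the kubernetes-upgrade-955082 context, as shown in the "k8s: kubectl config" section, and current-context is empty, so the harness's --context false-853693 cannot resolve. A hedged sketch of reproducing the same checks by hand:)

kubectl config get-contexts               # lists only kubernetes-upgrade-955082 in this run
kubectl config current-context            # errors: current-context is not set
kubectl --context false-853693 get pods   # error: context "false-853693" does not exist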
helpers_test.go:175: Cleaning up "false-853693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-853693
--- PASS: TestNetworkPlugins/group/false (4.05s)

TestStartStop/group/old-k8s-version/serial/FirstStart (61.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-834364 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0919 23:12:59.972429    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-834364 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.961428184s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.96s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-834364 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [983a11b4-e70d-4de3-a11f-9b2fdd8fcdfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [983a11b4-e70d-4de3-a11f-9b2fdd8fcdfc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.013322363s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-834364 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-834364 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-834364 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107651854s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-834364 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-834364 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-834364 --alsologtostderr -v=3: (11.97355068s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-834364 -n old-k8s-version-834364
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-834364 -n old-k8s-version-834364: exit status 7 (68.814887ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-834364 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
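(Editor's note: "exit status 7 (may be ok)" above is the harness tolerating a non-running cluster: minikube status exits non-zero when the host is stopped, which is exactly the state the preceding Stop test created. A hedged sketch of the same check:)

# status exits non-zero for a stopped profile; the harness accepts 7 here
out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-834364 -n old-k8s-version-834364
echo $?   # 7 in the run captured above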
TestStartStop/group/old-k8s-version/serial/SecondStart (55.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-834364 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0919 23:14:56.898862    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-834364 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.224519006s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-834364 -n old-k8s-version-834364
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6drg5" [5ff3d342-1360-44c4-98b5-8a91c44cf6b4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005036696s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6drg5" [5ff3d342-1360-44c4-98b5-8a91c44cf6b4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004113167s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-834364 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-834364 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
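(Editor's note: the VerifyKubernetesImages step parses "image list --format=json" and flags anything outside the expected Kubernetes image set, hence the "Found non-minikube image" lines above. A hedged sketch of inspecting the same output by hand; the repoTags field name is an assumption about the JSON shape:)

out/minikube-linux-arm64 -p old-k8s-version-834364 image list --format=json \
  | jq -r '.[].repoTags[]'   # hedged: assumes an array of objects, each with a repoTags list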
TestStartStop/group/old-k8s-version/serial/Pause (3.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-834364 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-834364 -n old-k8s-version-834364
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-834364 -n old-k8s-version-834364: exit status 2 (335.956211ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-834364 -n old-k8s-version-834364
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-834364 -n old-k8s-version-834364: exit status 2 (328.426721ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-834364 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-834364 -n old-k8s-version-834364
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-834364 -n old-k8s-version-834364
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.16s)

TestStartStop/group/no-preload/serial/FirstStart (72.58s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-167087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-167087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.57864557s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.58s)
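(Editor's note: --preload=false makes minikube skip the preloaded image tarball and pull everything through the container runtime instead, which is why this FirstStart takes over a minute. A hedged sketch of confirming the images landed in CRI-O's store:)

# hedged sketch: list images inside the node via the CRI tool
out/minikube-linux-arm64 -p no-preload-167087 ssh -- sudo crictl images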
TestStartStop/group/embed-certs/serial/FirstStart (80.64s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-913449 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-913449 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m20.637950681s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.64s)

TestStartStop/group/no-preload/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-167087 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c1ad03f5-7b46-410c-a657-40651da89cf8] Pending
helpers_test.go:352: "busybox" [c1ad03f5-7b46-410c-a657-40651da89cf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c1ad03f5-7b46-410c-a657-40651da89cf8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003810504s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-167087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-167087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-167087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.258517453s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-167087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-167087 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-167087 --alsologtostderr -v=3: (12.124970609s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-167087 -n no-preload-167087
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-167087 -n no-preload-167087: exit status 7 (72.062549ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-167087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (55.38s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-167087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-167087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (55.055785048s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-167087 -n no-preload-167087
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.38s)

TestStartStop/group/embed-certs/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-913449 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [53e530b6-3ee2-44e2-a16a-8adaf366a70a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 23:17:40.667409    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [53e530b6-3ee2-44e2-a16a-8adaf366a70a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.002748246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-913449 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-913449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-913449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068023423s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-913449 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-913449 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-913449 --alsologtostderr -v=3: (11.962523437s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj65b" [4adf1237-a006-4696-b054-c4de358870ba] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003041274s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-913449 -n embed-certs-913449
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-913449 -n embed-certs-913449: exit status 7 (79.436403ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-913449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (52.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-913449 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-913449 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.34034625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-913449 -n embed-certs-913449
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.76s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rj65b" [4adf1237-a006-4696-b054-c4de358870ba] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003653182s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-167087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-167087 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/no-preload/serial/Pause (4.55s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-167087 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-167087 --alsologtostderr -v=1: (1.342238759s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-167087 -n no-preload-167087
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-167087 -n no-preload-167087: exit status 2 (508.805839ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-167087 -n no-preload-167087
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-167087 -n no-preload-167087: exit status 2 (476.946699ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-167087 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-167087 --alsologtostderr -v=1: (1.046819704s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-167087 -n no-preload-167087
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-167087 -n no-preload-167087
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.55s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-931000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0919 23:18:50.264338    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.270653    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.281967    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.303616    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.344937    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.426314    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.587737    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:50.909506    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:51.551482    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:18:52.833086    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-931000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m23.638711424s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.64s)
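(Editor's note: --apiserver-port=8444 moves the API server off the default 8443, which is what the default-k8s-diff-port profile name refers to. A hedged sketch of confirming that the kubeconfig picked up the non-default port:)

# hedged sketch: the reported control-plane URL should end in :8444
kubectl --context default-k8s-diff-port-931000 cluster-info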
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lhtk8" [24e039d1-7e43-4434-92ca-e62114f8ac0f] Running
E0919 23:18:55.395041    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:19:00.516693    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003872239s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lhtk8" [24e039d1-7e43-4434-92ca-e62114f8ac0f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003375615s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-913449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-913449 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-913449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-913449 -n embed-certs-913449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-913449 -n embed-certs-913449: exit status 2 (320.498051ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-913449 -n embed-certs-913449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-913449 -n embed-certs-913449: exit status 2 (320.484553ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-913449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-913449 -n embed-certs-913449
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-913449 -n embed-certs-913449
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/newest-cni/serial/FirstStart (35.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-022845 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0919 23:19:31.240793    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-022845 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (35.197167696s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0f1a838d-5615-4ddc-a872-e44f4dabef3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0f1a838d-5615-4ddc-a872-e44f4dabef3c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.002972229s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.40s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-022845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-022845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046504235s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-022845 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-022845 --alsologtostderr -v=3: (1.265376776s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-022845 -n newest-cni-022845
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-022845 -n newest-cni-022845: exit status 7 (71.146297ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-022845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.3s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-022845 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-022845 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (16.786284744s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-022845 -n newest-cni-022845
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-931000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-931000 --alsologtostderr -v=3
E0919 23:19:56.900215    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/functional-995015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-931000 --alsologtostderr -v=3: (12.012476588s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000: exit status 7 (71.369898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-931000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-931000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-931000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (58.348571751s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-022845 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-022845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-022845 -n newest-cni-022845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-022845 -n newest-cni-022845: exit status 2 (321.037671ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-022845 -n newest-cni-022845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-022845 -n newest-cni-022845: exit status 2 (313.830794ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-022845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-022845 -n newest-cni-022845
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-022845 -n newest-cni-022845
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)

TestNetworkPlugins/group/auto/Start (84.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.844913332s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hx7pr" [361adbd4-df9e-4553-b77f-1ca27e4e50b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003819631s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hx7pr" [361adbd4-df9e-4553-b77f-1ca27e4e50b2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003971009s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-931000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-931000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000: exit status 2 (312.652704ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000: exit status 2 (337.671791ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-931000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)
E0919 23:27:00.533439    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/auto-853693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:27:07.585619    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:27:21.014870    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/auto-853693/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (81.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0919 23:21:34.124413    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.650809415s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-853693 "pgrep -a kubelet"
I0919 23:21:39.689914    4161 config.go:182] Loaded profile config "auto-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (13.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-853693 replace --force -f testdata/netcat-deployment.yaml
E0919 23:21:39.875956    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:39.882292    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:39.893950    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:39.918352    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:39.961256    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:40.043247    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6pgjc" [07fd042a-18f4-449e-8060-0cfee896c1f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:21:40.205024    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:40.526250    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:41.168002    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:42.449448    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:45.012056    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6pgjc" [07fd042a-18f4-449e-8060-0cfee896c1f3] Running
E0919 23:21:50.134333    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00431789s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.39s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (63.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0919 23:22:20.860821    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:40.667234    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/addons-497709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.119371509s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-82x9s" [0c0de043-16bb-4caf-8872-ebfa70d059b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003410983s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-853693 "pgrep -a kubelet"
I0919 23:22:51.896980    4161 config.go:182] Loaded profile config "kindnet-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-853693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zpwxm" [dd961f93-d461-47b6-80e8-b82e985749d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zpwxm" [dd961f93-d461-47b6-80e8-b82e985749d9] Running
E0919 23:23:01.822432    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00358752s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wx9sj" [e3b77567-70e6-4320-8600-24f66eafacfc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004201035s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-853693 "pgrep -a kubelet"
I0919 23:23:24.977358    4161 config.go:182] Loaded profile config "calico-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-853693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2gk7n" [76b3c198-573d-4f7b-8c4e-3d120e30a506] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2gk7n" [76b3c198-573d-4f7b-8c4e-3d120e30a506] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003679389s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

TestNetworkPlugins/group/custom-flannel/Start (67.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.857870783s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.86s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (78.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0919 23:24:17.966222    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/old-k8s-version-834364/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:23.743763    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/no-preload-167087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.135968638s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-853693 "pgrep -a kubelet"
I0919 23:24:34.595434    4161 config.go:182] Loaded profile config "custom-flannel-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-853693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dh5g4" [8b3f86ef-05bc-43ba-8d6d-ec4606f0308a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dh5g4" [8b3f86ef-05bc-43ba-8d6d-ec4606f0308a] Running
E0919 23:24:42.989772    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:42.996195    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:43.007621    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:43.029285    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:43.070666    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:43.152143    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:43.313794    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:43.635460    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:44.277221    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:45.558828    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00348985s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (61.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.11295506s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-853693 "pgrep -a kubelet"
I0919 23:25:23.452341    4161 config.go:182] Loaded profile config "enable-default-cni-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-853693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gw2rb" [703c7bcd-70e1-4168-a943-cb6793f1a91c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:25:23.967340    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gw2rb" [703c7bcd-70e1-4168-a943-cb6793f1a91c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003179582s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (83.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0919 23:26:04.929029    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-853693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m23.284550802s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-m248p" [4d76fb08-b1d6-4dc9-af29-6ff0e39a4eed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003534355s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-853693 "pgrep -a kubelet"
I0919 23:26:16.567029    4161 config.go:182] Loaded profile config "flannel-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-853693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-trxv7" [f7e106cb-015f-4870-9aef-9eabaa5b89fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-trxv7" [f7e106cb-015f-4870-9aef-9eabaa5b89fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004488792s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-853693 "pgrep -a kubelet"
I0919 23:27:23.692712    4161 config.go:182] Loaded profile config "bridge-853693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-853693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qgdkd" [46cbe048-8f1b-41ab-99c9-e21f38a2c5ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:27:26.850449    4161 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/default-k8s-diff-port-931000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qgdkd" [46cbe048-8f1b-41ab-99c9-e21f38a2c5ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.008818007s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)
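Note: the E0919 cert_rotation line in the middle of the wait appears to be client-side noise rather than a bridge failure: a cached TLS transport is still watching the client certificate of the default-k8s-diff-port-931000 profile, which was deleted earlier in the run, so the file it watches no longer exists. The netcat pod itself goes Running on schedule.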

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-853693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-853693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (32/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.55s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-334793 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-334793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-334793
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.39s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-497709 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.39s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-552400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-552400
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.7s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-853693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-853693
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-853693
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/hosts:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/resolv.conf:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-853693
>>> host: crictl pods:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: crictl containers:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> k8s: describe netcat deployment:
error: context "kubenet-853693" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-853693" does not exist
>>> k8s: netcat logs:
error: context "kubenet-853693" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-853693" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-853693" does not exist
>>> k8s: coredns logs:
error: context "kubenet-853693" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-853693" does not exist
>>> k8s: api server logs:
error: context "kubenet-853693" does not exist
>>> host: /etc/cni:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: ip a s:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: ip r s:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: iptables-save:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: iptables table nat:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-853693" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-853693" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-853693" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: kubelet daemon config:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> k8s: kubelet logs:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:07:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-955082
contexts:
- context:
    cluster: kubernetes-upgrade-955082
    user: kubernetes-upgrade-955082
  name: kubernetes-upgrade-955082
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-955082
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/kubernetes-upgrade-955082/client.crt
    client-key: /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/kubernetes-upgrade-955082/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-853693
>>> host: docker daemon status:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: docker daemon config:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: docker system info:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: cri-docker daemon status:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: cri-docker daemon config:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: cri-dockerd version:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: containerd daemon status:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: containerd daemon config:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: containerd config dump:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: crio daemon status:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: crio daemon config:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: /etc/crio:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
>>> host: crio config:
* Profile "kubenet-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-853693"
----------------------- debugLogs end: kubenet-853693 [took: 3.528918085s] --------------------------------
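Note: every probe above fails for the same root cause: the kubenet test is skipped before minikube start ever runs, so no kubenet-853693 profile or kubeconfig context is created, and the collected kubectl config still shows only leftover kubernetes-upgrade-955082 state with current-context "". Two quick host-side checks confirm this (illustrative, not part of the harness):

  kubectl config get-contexts
  out/minikube-linux-arm64 profile list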
helpers_test.go:175: Cleaning up "kubenet-853693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-853693
--- SKIP: TestNetworkPlugins/group/kubenet (3.70s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.3s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-853693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-853693" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21594-2355/.minikube/ca.crt
extensions:
- extension:
last-update: Fri, 19 Sep 2025 23:07:04 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-955082
contexts:
- context:
cluster: kubernetes-upgrade-955082
user: kubernetes-upgrade-955082
name: kubernetes-upgrade-955082
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-955082
user:
client-certificate: /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/kubernetes-upgrade-955082/client.crt
client-key: /home/jenkins/minikube-integration/21594-2355/.minikube/profiles/kubernetes-upgrade-955082/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-853693

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: cri-dockerd version:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: containerd daemon status:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: containerd daemon config:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: containerd config dump:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: crio daemon status:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: crio daemon config:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: /etc/crio:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

>>> host: crio config:
* Profile "cilium-853693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-853693"

----------------------- debugLogs end: cilium-853693 [took: 5.008635283s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-853693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-853693
--- SKIP: TestNetworkPlugins/group/cilium (5.30s)
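Note on the repeated "Profile ... not found" sections above: the debug-log collector queries every daemon on a profile that has already been deleted, so each query prints the same hint instead of real output. As a hedged illustration only (a hypothetical helper, not code from the harness), a guard like the one below could skip the dump when the profile is gone, using the exact "minikube profile list" command the hint itself recommends:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists shells out to "minikube profile list" and scans the
// output for the profile name. Hypothetical helper for illustration.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("minikube profile list: %v (output: %s)", err, out)
	}
	return strings.Contains(string(out), name), nil
}

func main() {
	ok, err := profileExists("cilium-853693")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if !ok {
		// Matches the hint printed throughout the dump above.
		fmt.Println(`To start a cluster, run: "minikube start -p cilium-853693"`)
	}
}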
