Test Report: Docker_Linux_crio_arm64 21512

                    
67b6671f4b7f755dd397ae36ae992d15d1f5bc42:2025-09-08:41332

Failed tests (6/332)

Order  Failed test                                   Duration (s)
37     TestAddons/parallel/Ingress                   153.38
98     TestFunctional/parallel/ServiceCmdConnect     603.84
144    TestFunctional/parallel/ServiceCmd/DeployApp  601
153    TestFunctional/parallel/ServiceCmd/HTTPS      0.52
154    TestFunctional/parallel/ServiceCmd/Format     0.55
155    TestFunctional/parallel/ServiceCmd/URL        0.56
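
Each failed test can be re-run on its own with go test's subtest selector. A minimal sketch, assuming a checkout of the minikube repository (the test lives under test/integration; the timeout value and any harness-specific flags such as the driver/runtime arguments this job used are assumptions):

	# re-run only the failing ingress test from the repository root
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m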
TestAddons/parallel/Ingress (153.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-953262 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-953262 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-953262 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [aeda3d65-6d40-4615-9892-2a54340e2467] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [aeda3d65-6d40-4615-9892-2a54340e2467] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004165411s
I0908 11:23:08.088540  295113 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-953262 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.416086648s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-953262 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
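
The non-zero exit above is curl's exit code 28 (operation timed out) propagated through minikube ssh: the ingress never answered on port 80 before curl gave up, after roughly 2m10s. A minimal sketch of the two probes for reproducing this by hand (profile name, host header, and node IP come from the log; the explicit --max-time is an added assumption, not part of the test):

	# probe the ingress from inside the node, mirroring the failing test step
	out/minikube-linux-arm64 -p addons-953262 ssh -- \
	  curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'

	# cross-check name resolution through the ingress-dns addon, as the test does next
	nslookup hello-john.test 192.168.49.2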
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-953262
helpers_test.go:243: (dbg) docker inspect addons-953262:

-- stdout --
	[
	    {
	        "Id": "bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221",
	        "Created": "2025-09-08T11:18:21.6613643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296278,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:18:21.704027465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221/hostname",
	        "HostsPath": "/var/lib/docker/containers/bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221/hosts",
	        "LogPath": "/var/lib/docker/containers/bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221/bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221-json.log",
	        "Name": "/addons-953262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-953262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-953262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221",
	                "LowerDir": "/var/lib/docker/overlay2/af1f7d665ac614694e8e83c8b97e2767aa9fc39bba7c2bea76982756478fc04d-init/diff:/var/lib/docker/overlay2/12fba0b2ee9605b82319300b6c0948dcd651b92089cc7fe5af71d16143e72a6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af1f7d665ac614694e8e83c8b97e2767aa9fc39bba7c2bea76982756478fc04d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af1f7d665ac614694e8e83c8b97e2767aa9fc39bba7c2bea76982756478fc04d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af1f7d665ac614694e8e83c8b97e2767aa9fc39bba7c2bea76982756478fc04d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-953262",
	                "Source": "/var/lib/docker/volumes/addons-953262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-953262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-953262",
	                "name.minikube.sigs.k8s.io": "addons-953262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51d1fe00fb2fa95010f60a7729423ab2af4a763d2ca64d2d7e1582063d77bb75",
	            "SandboxKey": "/var/run/docker/netns/51d1fe00fb2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-953262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:d1:a3:0c:75:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6348356f214318d7b74d55ddf75f576e15b66b27bc181fcd4da56da67dd022d",
	                    "EndpointID": "f11d18b98261d7cb2d5bec5724271ef8de72d5dafafb483e932d39dd0a0ffcb8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-953262",
	                        "bce9f06c3843"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
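
The inspect output shows each container port published to an ephemeral host port on 127.0.0.1 (SSH on 33139, the API server's 8443 on 33142). As a sketch, a single mapping can be read back with a Go template, matching the inspect expression minikube itself runs later in these logs:

	# resolve the host port backing the node's SSH endpoint (22/tcp)
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-953262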
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-953262 -n addons-953262
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 logs -n 25: (1.782329794s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-212564                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-212564 │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │ 08 Sep 25 11:17 UTC │
	│ start   │ --download-only -p binary-mirror-679593 --alsologtostderr --binary-mirror http://127.0.0.1:37151 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-679593   │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │                     │
	│ delete  │ -p binary-mirror-679593                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-679593   │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │ 08 Sep 25 11:17 UTC │
	│ addons  │ disable dashboard -p addons-953262                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │                     │
	│ addons  │ enable dashboard -p addons-953262                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │                     │
	│ start   │ -p addons-953262 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │ 08 Sep 25 11:21 UTC │
	│ addons  │ addons-953262 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:21 UTC │ 08 Sep 25 11:21 UTC │
	│ addons  │ addons-953262 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:21 UTC │ 08 Sep 25 11:21 UTC │
	│ addons  │ enable headlamp -p addons-953262 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:21 UTC │ 08 Sep 25 11:21 UTC │
	│ ip      │ addons-953262 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:21 UTC │ 08 Sep 25 11:21 UTC │
	│ addons  │ addons-953262 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:21 UTC │ 08 Sep 25 11:21 UTC │
	│ addons  │ addons-953262 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:21 UTC │ 08 Sep 25 11:22 UTC │
	│ addons  │ addons-953262 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:22 UTC │ 08 Sep 25 11:22 UTC │
	│ addons  │ addons-953262 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:22 UTC │ 08 Sep 25 11:22 UTC │
	│ addons  │ addons-953262 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:22 UTC │ 08 Sep 25 11:22 UTC │
	│ ssh     │ addons-953262 ssh cat /opt/local-path-provisioner/pvc-377fa41e-0785-4a6e-bef9-eaaf481c512a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:22 UTC │ 08 Sep 25 11:22 UTC │
	│ addons  │ addons-953262 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:22 UTC │ 08 Sep 25 11:22 UTC │
	│ addons  │ addons-953262 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:22 UTC │ 08 Sep 25 11:22 UTC │
	│ ssh     │ addons-953262 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:23 UTC │                     │
	│ addons  │ addons-953262 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:23 UTC │ 08 Sep 25 11:23 UTC │
	│ addons  │ addons-953262 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:23 UTC │ 08 Sep 25 11:23 UTC │
	│ addons  │ addons-953262 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:23 UTC │ 08 Sep 25 11:23 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-953262                                                                                                                                                                                                                                                                                                                                                                                           │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:23 UTC │ 08 Sep 25 11:23 UTC │
	│ addons  │ addons-953262 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:23 UTC │ 08 Sep 25 11:23 UTC │
	│ ip      │ addons-953262 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-953262          │ jenkins │ v1.36.0 │ 08 Sep 25 11:25 UTC │ 08 Sep 25 11:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:17:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:17:55.660389  295873 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:17:55.660533  295873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:17:55.660592  295873 out.go:374] Setting ErrFile to fd 2...
	I0908 11:17:55.660605  295873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:17:55.660893  295873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:17:55.661396  295873 out.go:368] Setting JSON to false
	I0908 11:17:55.662296  295873 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3628,"bootTime":1757326648,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 11:17:55.662373  295873 start.go:140] virtualization:  
	I0908 11:17:55.667670  295873 out.go:179] * [addons-953262] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 11:17:55.670644  295873 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:17:55.670741  295873 notify.go:220] Checking for updates...
	I0908 11:17:55.676496  295873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:17:55.679421  295873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:17:55.682362  295873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 11:17:55.685230  295873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 11:17:55.688489  295873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:17:55.691675  295873 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:17:55.727448  295873 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:17:55.727596  295873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:17:55.791628  295873 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 11:17:55.781989282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:17:55.791735  295873 docker.go:318] overlay module found
	I0908 11:17:55.794874  295873 out.go:179] * Using the docker driver based on user configuration
	I0908 11:17:55.797726  295873 start.go:304] selected driver: docker
	I0908 11:17:55.797754  295873 start.go:918] validating driver "docker" against <nil>
	I0908 11:17:55.797769  295873 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:17:55.798560  295873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:17:55.857137  295873 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 11:17:55.847821768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:17:55.857291  295873 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:17:55.857518  295873 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:17:55.860373  295873 out.go:179] * Using Docker driver with root privileges
	I0908 11:17:55.863161  295873 cni.go:84] Creating CNI manager for ""
	I0908 11:17:55.863236  295873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:17:55.863252  295873 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:17:55.863336  295873 start.go:348] cluster config:
	{Name:addons-953262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-953262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:17:55.866466  295873 out.go:179] * Starting "addons-953262" primary control-plane node in "addons-953262" cluster
	I0908 11:17:55.869335  295873 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:17:55.872247  295873 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:17:55.875067  295873 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:17:55.875131  295873 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 11:17:55.875145  295873 cache.go:58] Caching tarball of preloaded images
	I0908 11:17:55.875158  295873 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:17:55.875241  295873 preload.go:172] Found /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 11:17:55.875252  295873 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:17:55.875577  295873 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/config.json ...
	I0908 11:17:55.875607  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/config.json: {Name:mk0d8066a0c240fd5ac912de509e5cc901e506fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:17:55.891359  295873 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:17:55.891477  295873 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 11:17:55.891504  295873 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 11:17:55.891510  295873 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 11:17:55.891523  295873 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 11:17:55.891529  295873 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 11:18:13.724994  295873 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 11:18:13.725030  295873 cache.go:232] Successfully downloaded all kic artifacts
	I0908 11:18:13.725069  295873 start.go:360] acquireMachinesLock for addons-953262: {Name:mkfd17226113d7221bf4ee3d5c04aed68b43ce76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:18:13.725730  295873 start.go:364] duration metric: took 638.211µs to acquireMachinesLock for "addons-953262"
	I0908 11:18:13.725787  295873 start.go:93] Provisioning new machine with config: &{Name:addons-953262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-953262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:18:13.725871  295873 start.go:125] createHost starting for "" (driver="docker")
	I0908 11:18:13.729316  295873 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 11:18:13.729609  295873 start.go:159] libmachine.API.Create for "addons-953262" (driver="docker")
	I0908 11:18:13.729658  295873 client.go:168] LocalClient.Create starting
	I0908 11:18:13.729830  295873 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem
	I0908 11:18:14.305671  295873 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem
	I0908 11:18:15.145912  295873 cli_runner.go:164] Run: docker network inspect addons-953262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 11:18:15.165602  295873 cli_runner.go:211] docker network inspect addons-953262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 11:18:15.165683  295873 network_create.go:284] running [docker network inspect addons-953262] to gather additional debugging logs...
	I0908 11:18:15.165703  295873 cli_runner.go:164] Run: docker network inspect addons-953262
	W0908 11:18:15.182641  295873 cli_runner.go:211] docker network inspect addons-953262 returned with exit code 1
	I0908 11:18:15.182675  295873 network_create.go:287] error running [docker network inspect addons-953262]: docker network inspect addons-953262: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-953262 not found
	I0908 11:18:15.182692  295873 network_create.go:289] output of [docker network inspect addons-953262]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-953262 not found
	
	** /stderr **
	I0908 11:18:15.182809  295873 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:18:15.199153  295873 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f8cb0}
	I0908 11:18:15.199204  295873 network_create.go:124] attempt to create docker network addons-953262 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 11:18:15.199273  295873 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-953262 addons-953262
	I0908 11:18:15.262065  295873 network_create.go:108] docker network addons-953262 192.168.49.0/24 created
	I0908 11:18:15.262100  295873 kic.go:121] calculated static IP "192.168.49.2" for the "addons-953262" container
	I0908 11:18:15.262178  295873 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 11:18:15.277223  295873 cli_runner.go:164] Run: docker volume create addons-953262 --label name.minikube.sigs.k8s.io=addons-953262 --label created_by.minikube.sigs.k8s.io=true
	I0908 11:18:15.295356  295873 oci.go:103] Successfully created a docker volume addons-953262
	I0908 11:18:15.295464  295873 cli_runner.go:164] Run: docker run --rm --name addons-953262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-953262 --entrypoint /usr/bin/test -v addons-953262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 11:18:17.348897  295873 cli_runner.go:217] Completed: docker run --rm --name addons-953262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-953262 --entrypoint /usr/bin/test -v addons-953262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (2.053390071s)
	I0908 11:18:17.348927  295873 oci.go:107] Successfully prepared a docker volume addons-953262
	I0908 11:18:17.348956  295873 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:18:17.348975  295873 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 11:18:17.349047  295873 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-953262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 11:18:21.594228  295873 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-953262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.245126055s)
	I0908 11:18:21.594260  295873 kic.go:203] duration metric: took 4.245281618s to extract preloaded images to volume ...
	W0908 11:18:21.594397  295873 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 11:18:21.594505  295873 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 11:18:21.646976  295873 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-953262 --name addons-953262 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-953262 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-953262 --network addons-953262 --ip 192.168.49.2 --volume addons-953262:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 11:18:21.922346  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Running}}
	I0908 11:18:21.941759  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:21.971827  295873 cli_runner.go:164] Run: docker exec addons-953262 stat /var/lib/dpkg/alternatives/iptables
	I0908 11:18:22.030784  295873 oci.go:144] the created container "addons-953262" has a running status.
	I0908 11:18:22.030831  295873 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa...
	I0908 11:18:22.258975  295873 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 11:18:22.285557  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:22.307833  295873 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 11:18:22.307853  295873 kic_runner.go:114] Args: [docker exec --privileged addons-953262 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 11:18:22.385620  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:22.418637  295873 machine.go:93] provisionDockerMachine start ...
	I0908 11:18:22.418756  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:22.441007  295873 main.go:141] libmachine: Using SSH client type: native
	I0908 11:18:22.441328  295873 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 11:18:22.441337  295873 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:18:22.441957  295873 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40840->127.0.0.1:33139: read: connection reset by peer
	I0908 11:18:25.577607  295873 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-953262
	
	I0908 11:18:25.577635  295873 ubuntu.go:182] provisioning hostname "addons-953262"
	I0908 11:18:25.577726  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:25.595354  295873 main.go:141] libmachine: Using SSH client type: native
	I0908 11:18:25.595663  295873 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 11:18:25.595679  295873 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-953262 && echo "addons-953262" | sudo tee /etc/hostname
	I0908 11:18:25.734248  295873 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-953262
	
	I0908 11:18:25.734342  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:25.752504  295873 main.go:141] libmachine: Using SSH client type: native
	I0908 11:18:25.752818  295873 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 11:18:25.752833  295873 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-953262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-953262/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-953262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:18:25.878066  295873 main.go:141] libmachine: SSH cmd err, output: <nil>: 
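The hostname block above is idempotent: it touches /etc/hosts only when the name is missing, rewriting an existing 127.0.1.1 entry if there is one and appending otherwise. A rough local equivalent in Go (hostname and path taken from the log; this sketch edits the file directly rather than over SSH):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const hostname = "addons-953262" // from the log
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors the grep check: if some line already ends in the hostname, do nothing.
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.Match(data) {
            // Rewrite the existing 127.0.1.1 entry in place (the sed branch).
            data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
        } else {
            // Otherwise append a fresh entry (the tee -a branch).
            data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        if err := os.WriteFile("/etc/hosts", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }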
	I0908 11:18:25.878091  295873 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-293252/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-293252/.minikube}
	I0908 11:18:25.878126  295873 ubuntu.go:190] setting up certificates
	I0908 11:18:25.878135  295873 provision.go:84] configureAuth start
	I0908 11:18:25.878200  295873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-953262
	I0908 11:18:25.895056  295873 provision.go:143] copyHostCerts
	I0908 11:18:25.895140  295873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-293252/.minikube/key.pem (1675 bytes)
	I0908 11:18:25.895259  295873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-293252/.minikube/ca.pem (1078 bytes)
	I0908 11:18:25.895314  295873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-293252/.minikube/cert.pem (1123 bytes)
	I0908 11:18:25.895359  295873 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-293252/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca-key.pem org=jenkins.addons-953262 san=[127.0.0.1 192.168.49.2 addons-953262 localhost minikube]
	I0908 11:18:26.185399  295873 provision.go:177] copyRemoteCerts
	I0908 11:18:26.185471  295873 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:18:26.185516  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:26.202165  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:26.298664  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:18:26.323133  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0908 11:18:26.347046  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:18:26.370575  295873 provision.go:87] duration metric: took 492.417318ms to configureAuth
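The server cert minted during configureAuth carries both IP and DNS SANs (127.0.0.1, 192.168.49.2, addons-953262, localhost, minikube). A standard-library sketch of the same SAN layout; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair listed above:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-953262"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as in the san=[...] log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"addons-953262", "localhost", "minikube"},
        }
        // Self-signed here; minikube passes the CA cert and key as the parent instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", out, 0o644); err != nil {
            log.Fatal(err)
        }
    }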
	I0908 11:18:26.370604  295873 ubuntu.go:206] setting minikube options for container-runtime
	I0908 11:18:26.370791  295873 config.go:182] Loaded profile config "addons-953262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:18:26.370902  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:26.387787  295873 main.go:141] libmachine: Using SSH client type: native
	I0908 11:18:26.388122  295873 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 11:18:26.388142  295873 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:18:26.610578  295873 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:18:26.610602  295873 machine.go:96] duration metric: took 4.191945102s to provisionDockerMachine
	I0908 11:18:26.610613  295873 client.go:171] duration metric: took 12.880945232s to LocalClient.Create
	I0908 11:18:26.610628  295873 start.go:167] duration metric: took 12.881019169s to libmachine.API.Create "addons-953262"
	I0908 11:18:26.610636  295873 start.go:293] postStartSetup for "addons-953262" (driver="docker")
	I0908 11:18:26.610646  295873 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:18:26.610715  295873 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:18:26.610769  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:26.627722  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:26.719030  295873 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:18:26.722248  295873 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 11:18:26.722329  295873 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 11:18:26.722346  295873 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 11:18:26.722354  295873 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 11:18:26.722364  295873 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-293252/.minikube/addons for local assets ...
	I0908 11:18:26.722438  295873 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-293252/.minikube/files for local assets ...
	I0908 11:18:26.722465  295873 start.go:296] duration metric: took 111.823449ms for postStartSetup
	I0908 11:18:26.722791  295873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-953262
	I0908 11:18:26.739300  295873 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/config.json ...
	I0908 11:18:26.739593  295873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:18:26.739643  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:26.755648  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:26.842500  295873 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 11:18:26.847077  295873 start.go:128] duration metric: took 13.121188607s to createHost
	I0908 11:18:26.847102  295873 start.go:83] releasing machines lock for "addons-953262", held for 13.121351596s
	I0908 11:18:26.847174  295873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-953262
	I0908 11:18:26.869045  295873 ssh_runner.go:195] Run: cat /version.json
	I0908 11:18:26.869083  295873 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:18:26.869099  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:26.869173  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:26.889507  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:26.905077  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:26.981314  295873 ssh_runner.go:195] Run: systemctl --version
	I0908 11:18:27.108282  295873 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:18:27.248795  295873 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:18:27.253101  295873 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:18:27.274591  295873 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 11:18:27.274749  295873 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:18:27.311023  295873 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
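Those two find/mv passes sideline any pre-existing loopback, bridge, or podman CNI config by renaming it with a .mk_disabled suffix, so only the CNI that minikube installs later is active. A glob-based sketch of the same rename (directory and patterns from the log):

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
            if err != nil {
                log.Fatal(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined, matching the -not -name filter
                }
                // Renaming keeps the file around so it could be restored later.
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
            }
        }
    }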
	I0908 11:18:27.311052  295873 start.go:495] detecting cgroup driver to use...
	I0908 11:18:27.311087  295873 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 11:18:27.311140  295873 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:18:27.327698  295873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:18:27.339348  295873 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:18:27.339455  295873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:18:27.354094  295873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:18:27.369361  295873 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:18:27.460144  295873 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:18:27.554850  295873 docker.go:234] disabling docker service ...
	I0908 11:18:27.554921  295873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:18:27.574495  295873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:18:27.587003  295873 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:18:27.678212  295873 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:18:27.776727  295873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:18:27.788719  295873 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:18:27.804959  295873 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:18:27.805051  295873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.816104  295873 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:18:27.816173  295873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.826486  295873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.836959  295873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.847230  295873 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:18:27.856174  295873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.866086  295873 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.881529  295873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:18:27.891173  295873 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:18:27.899845  295873 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:18:27.908194  295873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:27.991869  295873 ssh_runner.go:195] Run: sudo systemctl restart crio
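Each sed call above is a line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force cgroup_manager to cgroupfs, re-seed conmon_cgroup and the unprivileged-port sysctl, then reload and restart crio. A sketch of the first two edits as in-process regexp replacements (path and values from the log):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors sed 's|^.*pause_image = .*$|...|' and the cgroup_manager edit.
        rules := []struct{ re, repl string }{
            {`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
        }
        for _, r := range rules {
            data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
        }
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // The real flow follows with `systemctl daemon-reload` and `systemctl restart crio`.
    }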
	I0908 11:18:28.106932  295873 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:18:28.107101  295873 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:18:28.111596  295873 start.go:563] Will wait 60s for crictl version
	I0908 11:18:28.111718  295873 ssh_runner.go:195] Run: which crictl
	I0908 11:18:28.115600  295873 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:18:28.155708  295873 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 11:18:28.155877  295873 ssh_runner.go:195] Run: crio --version
	I0908 11:18:28.196193  295873 ssh_runner.go:195] Run: crio --version
	I0908 11:18:28.236254  295873 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 11:18:28.239131  295873 cli_runner.go:164] Run: docker network inspect addons-953262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:18:28.255517  295873 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 11:18:28.259245  295873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:18:28.270118  295873 kubeadm.go:875] updating cluster {Name:addons-953262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-953262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:18:28.270235  295873 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:18:28.270300  295873 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:18:28.349711  295873 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:18:28.349736  295873 crio.go:433] Images already preloaded, skipping extraction
	I0908 11:18:28.349803  295873 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:18:28.385457  295873 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:18:28.385482  295873 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:18:28.385490  295873 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 11:18:28.385572  295873 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-953262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-953262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:18:28.385649  295873 ssh_runner.go:195] Run: crio config
	I0908 11:18:28.459065  295873 cni.go:84] Creating CNI manager for ""
	I0908 11:18:28.459089  295873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:18:28.459100  295873 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:18:28.459143  295873 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-953262 NodeName:addons-953262 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:18:28.459288  295873 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-953262"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
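	minikube renders that kubeadm.yaml from the options struct logged at kubeadm.go:189. A toy text/template rendering of just the InitConfiguration head, with values copied from the log; the template text here is illustrative, not minikube's actual one:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        // Values copied from the kubeadm options line in the log.
        opts := struct {
            AdvertiseAddress string
            APIServerPort    int
            CRISocket        string
            NodeName         string
        }{"192.168.49.2", 8443, "/var/run/crio/crio.sock", "addons-953262"}

        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, opts); err != nil {
            log.Fatal(err)
        }
    }

Running it prints the first stanza of the YAML dumped above; the rendered file is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below.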
	
	I0908 11:18:28.459374  295873 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:18:28.468063  295873 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:18:28.468191  295873 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:18:28.477021  295873 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 11:18:28.495030  295873 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:18:28.513252  295873 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 11:18:28.531580  295873 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 11:18:28.535201  295873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:18:28.545737  295873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:28.639233  295873 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:18:28.653094  295873 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262 for IP: 192.168.49.2
	I0908 11:18:28.653167  295873 certs.go:194] generating shared ca certs ...
	I0908 11:18:28.653208  295873 certs.go:226] acquiring lock for ca certs: {Name:mkec8a5dd4303f23225e4d611fe7863c5eaee420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:28.653387  295873 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-293252/.minikube/ca.key
	I0908 11:18:28.859907  295873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt ...
	I0908 11:18:28.859940  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt: {Name:mkd90da90a41d06acea549dc1bce791e8b51a922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:28.860174  295873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-293252/.minikube/ca.key ...
	I0908 11:18:28.860192  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/ca.key: {Name:mke434b914e15d835fd69ac4116eae9743e3484b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:28.860279  295873 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.key
	I0908 11:18:29.085463  295873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.crt ...
	I0908 11:18:29.085493  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.crt: {Name:mka4be7ba1a68dadca804a6e6f6c450dc8fcfd0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:29.086326  295873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.key ...
	I0908 11:18:29.086342  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.key: {Name:mk9b1bc755b08565aff073c33696a727f9717c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:29.086431  295873 certs.go:256] generating profile certs ...
	I0908 11:18:29.086491  295873 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.key
	I0908 11:18:29.086507  295873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt with IP's: []
	I0908 11:18:29.724112  295873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt ...
	I0908 11:18:29.724147  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: {Name:mk1f4c24d12b37de1f514090f04e5c29a0bc7e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:29.724329  295873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.key ...
	I0908 11:18:29.724343  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.key: {Name:mkf3e4656b890bc257a9d4efe0fa15b33738431f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:29.724430  295873 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.key.3a7b61f5
	I0908 11:18:29.724449  295873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.crt.3a7b61f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 11:18:30.068363  295873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.crt.3a7b61f5 ...
	I0908 11:18:30.068402  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.crt.3a7b61f5: {Name:mk6a61c9bc5fc3fd456122c08ae2a855d1721ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:30.068759  295873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.key.3a7b61f5 ...
	I0908 11:18:30.069991  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.key.3a7b61f5: {Name:mk2ed3b294e3cd0c0f15a501469eeacd6e393230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:30.070234  295873 certs.go:381] copying /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.crt.3a7b61f5 -> /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.crt
	I0908 11:18:30.084623  295873 certs.go:385] copying /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.key.3a7b61f5 -> /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.key
	I0908 11:18:30.084901  295873 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.key
	I0908 11:18:30.093134  295873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.crt with IP's: []
	I0908 11:18:30.424078  295873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.crt ...
	I0908 11:18:30.424112  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.crt: {Name:mkce9664db4c6339663d7d044f8f1aaf1beb2e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:30.424297  295873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.key ...
	I0908 11:18:30.424312  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.key: {Name:mk87ba72501ac5774c3c066b1068059fea45103b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:30.424509  295873 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:18:30.424550  295873 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:18:30.424578  295873 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:18:30.424612  295873 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/key.pem (1675 bytes)
	I0908 11:18:30.425230  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:18:30.450062  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 11:18:30.473593  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:18:30.497289  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:18:30.521217  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 11:18:30.544913  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:18:30.569095  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:18:30.595055  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0908 11:18:30.619558  295873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:18:30.645170  295873 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:18:30.667392  295873 ssh_runner.go:195] Run: openssl version
	I0908 11:18:30.672930  295873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:18:30.682937  295873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:30.686590  295873 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:18 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:30.686676  295873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:18:30.693645  295873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
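The b5213941.0 name comes from OpenSSL's subject-hash convention: `openssl x509 -hash` prints a short subject hash, and a symlink named <hash>.0 in /etc/ssl/certs lets verifiers locate the CA. A sketch that computes the hash by shelling out to the same openssl invocation and recreates the link (paths from the log):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const ca = "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Recreate the symlink idempotently, as the `test -L || ln -fs` above does.
        _ = os.Remove(link)
        if err := os.Symlink(ca, link); err != nil {
            log.Fatal(err)
        }
    }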
	I0908 11:18:30.703240  295873 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:18:30.706542  295873 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:18:30.706588  295873 kubeadm.go:392] StartCluster: {Name:addons-953262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-953262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:18:30.706704  295873 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:18:30.706805  295873 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:18:30.742907  295873 cri.go:89] found id: ""
	I0908 11:18:30.743144  295873 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:18:30.752005  295873 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:18:30.760467  295873 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 11:18:30.760551  295873 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:18:30.769521  295873 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:18:30.769540  295873 kubeadm.go:157] found existing configuration files:
	
	I0908 11:18:30.769597  295873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:18:30.778483  295873 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:18:30.778568  295873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:18:30.787497  295873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:18:30.796647  295873 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:18:30.796714  295873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:18:30.805473  295873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:18:30.814615  295873 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:18:30.814701  295873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:18:30.823529  295873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:18:30.832513  295873 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:18:30.832584  295873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:18:30.841221  295873 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 11:18:30.886540  295873 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 11:18:30.886603  295873 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 11:18:30.903045  295873 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 11:18:30.903119  295873 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 11:18:30.903160  295873 kubeadm.go:310] OS: Linux
	I0908 11:18:30.903214  295873 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 11:18:30.903266  295873 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 11:18:30.903319  295873 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 11:18:30.903427  295873 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 11:18:30.903526  295873 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 11:18:30.903646  295873 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 11:18:30.903731  295873 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 11:18:30.903817  295873 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 11:18:30.903903  295873 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 11:18:30.973445  295873 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 11:18:30.973562  295873 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 11:18:30.973660  295873 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 11:18:30.982260  295873 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 11:18:30.986012  295873 out.go:252]   - Generating certificates and keys ...
	I0908 11:18:30.986114  295873 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 11:18:30.986186  295873 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 11:18:31.122919  295873 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 11:18:32.455632  295873 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 11:18:32.865973  295873 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 11:18:33.172865  295873 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 11:18:33.959556  295873 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 11:18:33.959922  295873 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-953262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 11:18:34.649674  295873 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 11:18:34.650155  295873 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-953262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 11:18:35.353875  295873 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 11:18:35.896725  295873 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 11:18:37.058077  295873 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 11:18:37.058321  295873 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 11:18:38.069101  295873 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 11:18:38.561183  295873 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 11:18:38.786879  295873 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 11:18:39.095918  295873 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 11:18:39.304121  295873 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 11:18:39.304923  295873 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 11:18:39.307791  295873 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 11:18:39.311473  295873 out.go:252]   - Booting up control plane ...
	I0908 11:18:39.311600  295873 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 11:18:39.311684  295873 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 11:18:39.313310  295873 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 11:18:39.322918  295873 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 11:18:39.323301  295873 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 11:18:39.330174  295873 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 11:18:39.330497  295873 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 11:18:39.330545  295873 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 11:18:39.426756  295873 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 11:18:39.426892  295873 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 11:18:40.927914  295873 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501461152s
	I0908 11:18:40.931410  295873 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 11:18:40.931708  295873 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 11:18:40.931963  295873 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 11:18:40.932062  295873 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 11:18:43.839237  295873 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.907398142s
	I0908 11:18:47.313237  295873 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 6.381735705s
	I0908 11:18:47.934727  295873 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.003142145s
	I0908 11:18:47.957145  295873 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 11:18:47.971873  295873 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 11:18:47.986173  295873 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 11:18:47.986400  295873 kubeadm.go:310] [mark-control-plane] Marking the node addons-953262 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 11:18:47.998946  295873 kubeadm.go:310] [bootstrap-token] Using token: nkgjy0.55x1r73m0ypl73nf
	I0908 11:18:48.001894  295873 out.go:252]   - Configuring RBAC rules ...
	I0908 11:18:48.002022  295873 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 11:18:48.015030  295873 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 11:18:48.025073  295873 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 11:18:48.030082  295873 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 11:18:48.034035  295873 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 11:18:48.039131  295873 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 11:18:48.342582  295873 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 11:18:48.769267  295873 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 11:18:49.341395  295873 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 11:18:49.346075  295873 kubeadm.go:310] 
	I0908 11:18:49.346157  295873 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 11:18:49.346169  295873 kubeadm.go:310] 
	I0908 11:18:49.346247  295873 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 11:18:49.346256  295873 kubeadm.go:310] 
	I0908 11:18:49.346282  295873 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 11:18:49.346345  295873 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 11:18:49.346401  295873 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 11:18:49.346425  295873 kubeadm.go:310] 
	I0908 11:18:49.346481  295873 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 11:18:49.346490  295873 kubeadm.go:310] 
	I0908 11:18:49.346538  295873 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 11:18:49.346547  295873 kubeadm.go:310] 
	I0908 11:18:49.346600  295873 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 11:18:49.346680  295873 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 11:18:49.346752  295873 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 11:18:49.346760  295873 kubeadm.go:310] 
	I0908 11:18:49.346853  295873 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 11:18:49.346937  295873 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 11:18:49.346946  295873 kubeadm.go:310] 
	I0908 11:18:49.347031  295873 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nkgjy0.55x1r73m0ypl73nf \
	I0908 11:18:49.347139  295873 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d6bfa20265330d555f5244c2fc2b3a5259b71bdb6a132f37c6ddb259ab09e190 \
	I0908 11:18:49.347168  295873 kubeadm.go:310] 	--control-plane 
	I0908 11:18:49.347179  295873 kubeadm.go:310] 
	I0908 11:18:49.347265  295873 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 11:18:49.347273  295873 kubeadm.go:310] 
	I0908 11:18:49.347356  295873 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nkgjy0.55x1r73m0ypl73nf \
	I0908 11:18:49.347463  295873 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d6bfa20265330d555f5244c2fc2b3a5259b71bdb6a132f37c6ddb259ab09e190 
	I0908 11:18:49.348865  295873 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 11:18:49.349094  295873 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 11:18:49.349205  295873 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 11:18:49.349231  295873 cni.go:84] Creating CNI manager for ""
	I0908 11:18:49.349243  295873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:18:49.352379  295873 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 11:18:49.355286  295873 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 11:18:49.361088  295873 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 11:18:49.361113  295873 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 11:18:49.381450  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 11:18:49.667346  295873 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:18:49.667410  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:49.667486  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-953262 minikube.k8s.io/updated_at=2025_09_08T11_18_49_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=addons-953262 minikube.k8s.io/primary=true
	I0908 11:18:49.838549  295873 ops.go:34] apiserver oom_adj: -16
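The -16 is the apiserver's OOM score adjustment, read straight from procfs so minikube can confirm the kernel's OOM killer will spare the apiserver under memory pressure. A small sketch of the same check; it uses pgrep exactly as the logged command does (/proc/<pid>/oom_adj is the legacy file, oom_score_adj its modern counterpart):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep prints the PID of the running kube-apiserver, as in the logged command.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }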
	I0908 11:18:49.838664  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:50.339356  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:50.839049  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:51.339097  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:51.838770  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:52.338817  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:52.839186  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:53.339576  295873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:18:53.430102  295873 kubeadm.go:1105] duration metric: took 3.762745556s to wait for elevateKubeSystemPrivileges
	I0908 11:18:53.430131  295873 kubeadm.go:394] duration metric: took 22.723547445s to StartCluster
	I0908 11:18:53.430149  295873 settings.go:142] acquiring lock: {Name:mkbde80afcd769206bcbb25bd8990d83418a87bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:53.430269  295873 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:18:53.430666  295873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/kubeconfig: {Name:mk390277a44357409639aba3926256bcd9fea3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:18:53.430896  295873 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:18:53.431053  295873 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 11:18:53.431311  295873 config.go:182] Loaded profile config "addons-953262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:18:53.431360  295873 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 11:18:53.431440  295873 addons.go:69] Setting yakd=true in profile "addons-953262"
	I0908 11:18:53.431459  295873 addons.go:238] Setting addon yakd=true in "addons-953262"
	I0908 11:18:53.431487  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.431980  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.432123  295873 addons.go:69] Setting inspektor-gadget=true in profile "addons-953262"
	I0908 11:18:53.432149  295873 addons.go:238] Setting addon inspektor-gadget=true in "addons-953262"
	I0908 11:18:53.432176  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.432563  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.433019  295873 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-953262"
	I0908 11:18:53.433047  295873 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-953262"
	I0908 11:18:53.433070  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.433522  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.436689  295873 addons.go:69] Setting metrics-server=true in profile "addons-953262"
	I0908 11:18:53.436763  295873 addons.go:238] Setting addon metrics-server=true in "addons-953262"
	I0908 11:18:53.436842  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.437325  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.439580  295873 addons.go:69] Setting cloud-spanner=true in profile "addons-953262"
	I0908 11:18:53.439619  295873 addons.go:238] Setting addon cloud-spanner=true in "addons-953262"
	I0908 11:18:53.439649  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.440096  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.441505  295873 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-953262"
	I0908 11:18:53.441572  295873 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-953262"
	I0908 11:18:53.441601  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.443487  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.457384  295873 addons.go:69] Setting default-storageclass=true in profile "addons-953262"
	I0908 11:18:53.457475  295873 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-953262"
	I0908 11:18:53.457935  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.459881  295873 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-953262"
	I0908 11:18:53.459957  295873 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-953262"
	I0908 11:18:53.460041  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.460607  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.475873  295873 addons.go:69] Setting registry=true in profile "addons-953262"
	I0908 11:18:53.475967  295873 addons.go:238] Setting addon registry=true in "addons-953262"
	I0908 11:18:53.476036  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.476568  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.481305  295873 addons.go:69] Setting gcp-auth=true in profile "addons-953262"
	I0908 11:18:53.481401  295873 mustload.go:65] Loading cluster: addons-953262
	I0908 11:18:53.481667  295873 config.go:182] Loaded profile config "addons-953262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:18:53.482087  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.493795  295873 addons.go:69] Setting ingress=true in profile "addons-953262"
	I0908 11:18:53.528769  295873 addons.go:238] Setting addon ingress=true in "addons-953262"
	I0908 11:18:53.528855  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.529466  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.493813  295873 addons.go:69] Setting ingress-dns=true in profile "addons-953262"
	I0908 11:18:53.551051  295873 addons.go:238] Setting addon ingress-dns=true in "addons-953262"
	I0908 11:18:53.551137  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.551774  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.493940  295873 out.go:179] * Verifying Kubernetes components...
	I0908 11:18:53.512560  295873 addons.go:69] Setting registry-creds=true in profile "addons-953262"
	I0908 11:18:53.568861  295873 addons.go:238] Setting addon registry-creds=true in "addons-953262"
	I0908 11:18:53.568914  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.569468  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.512573  295873 addons.go:69] Setting storage-provisioner=true in profile "addons-953262"
	I0908 11:18:53.600299  295873 addons.go:238] Setting addon storage-provisioner=true in "addons-953262"
	I0908 11:18:53.600343  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.600891  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.512580  295873 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-953262"
	I0908 11:18:53.630423  295873 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-953262"
	I0908 11:18:53.630817  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.512597  295873 addons.go:69] Setting volcano=true in profile "addons-953262"
	I0908 11:18:53.639987  295873 addons.go:238] Setting addon volcano=true in "addons-953262"
	I0908 11:18:53.640060  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.640648  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.643032  295873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:18:53.512600  295873 addons.go:69] Setting volumesnapshots=true in profile "addons-953262"
	I0908 11:18:53.655366  295873 addons.go:238] Setting addon volumesnapshots=true in "addons-953262"
	I0908 11:18:53.655412  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.656019  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.660490  295873 addons.go:238] Setting addon default-storageclass=true in "addons-953262"
	I0908 11:18:53.660536  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.666455  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.690146  295873 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 11:18:53.691523  295873 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 11:18:53.691611  295873 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0908 11:18:53.691617  295873 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 11:18:53.727415  295873 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 11:18:53.691850  295873 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 11:18:53.693292  295873 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 11:18:53.731207  295873 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 11:18:53.731292  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.731984  295873 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 11:18:53.732044  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 11:18:53.732130  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.730298  295873 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 11:18:53.742156  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 11:18:53.742358  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.756396  295873 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 11:18:53.759390  295873 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 11:18:53.759413  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 11:18:53.759488  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.730307  295873 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 11:18:53.759896  295873 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 11:18:53.759939  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.730325  295873 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 11:18:53.771891  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 11:18:53.771969  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.774393  295873 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 11:18:53.774676  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 11:18:53.774826  295873 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:18:53.774852  295873 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:18:53.774928  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.730388  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.826809  295873 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 11:18:53.830033  295873 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 11:18:53.830058  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 11:18:53.830122  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	W0908 11:18:53.852483  295873 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 11:18:53.860145  295873 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-953262"
	I0908 11:18:53.860188  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:18:53.860591  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:18:53.864387  295873 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
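	For context, the sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1). A sketch of the resulting Corefile fragment, reconstructed from the sed patterns themselves (the surrounding stock directives are assumed, not captured from the cluster):

		log
		errors
		...
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf
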
	I0908 11:18:53.865692  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 11:18:53.865976  295873 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 11:18:53.866303  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 11:18:53.866441  295873 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:18:53.866500  295873 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:18:53.870031  295873 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:18:53.870107  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.876931  295873 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 11:18:53.876958  295873 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 11:18:53.877024  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.866535  295873 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 11:18:53.866905  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:53.896263  295873 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:18:53.896482  295873 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 11:18:53.896506  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 11:18:53.896571  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.910610  295873 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:18:53.910635  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:18:53.910713  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.941984  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 11:18:53.944976  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 11:18:53.951739  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 11:18:53.954597  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:53.956116  295873 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:18:53.957279  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:53.958047  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 11:18:53.958169  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:53.961311  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:53.966520  295873 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 11:18:53.966541  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 11:18:53.966608  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:53.972115  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 11:18:53.975103  295873 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 11:18:53.977981  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 11:18:53.978008  295873 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 11:18:53.978080  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:54.014663  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.017952  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.031425  295873 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:18:54.053900  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.103601  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.104045  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.106824  295873 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 11:18:54.113222  295873 out.go:179]   - Using image docker.io/busybox:stable
	I0908 11:18:54.122422  295873 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 11:18:54.122444  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 11:18:54.122632  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:18:54.123873  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	W0908 11:18:54.125464  295873 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 11:18:54.125499  295873 retry.go:31] will retry after 158.082391ms: ssh: handshake failed: EOF
	I0908 11:18:54.127217  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.139067  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.143144  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.173853  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:18:54.422284  295873 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:18:54.422348  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 11:18:54.457139  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:18:54.481185  295873 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:18:54.481206  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 11:18:54.501920  295873 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 11:18:54.501996  295873 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 11:18:54.523957  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 11:18:54.531922  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 11:18:54.547131  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 11:18:54.547858  295873 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 11:18:54.547908  295873 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 11:18:54.558580  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 11:18:54.562433  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:18:54.605907  295873 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:18:54.605990  295873 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:18:54.630685  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 11:18:54.636738  295873 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 11:18:54.636812  295873 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 11:18:54.653070  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 11:18:54.663278  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:18:54.692292  295873 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 11:18:54.692368  295873 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 11:18:54.728935  295873 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 11:18:54.729018  295873 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 11:18:54.750933  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 11:18:54.796471  295873 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 11:18:54.796551  295873 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 11:18:54.808239  295873 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:18:54.808319  295873 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:18:54.885400  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 11:18:54.885480  295873 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 11:18:54.890031  295873 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 11:18:54.890109  295873 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 11:18:54.926530  295873 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 11:18:54.926604  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 11:18:55.004559  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:18:55.008070  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 11:18:55.008157  295873 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 11:18:55.069614  295873 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 11:18:55.069686  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 11:18:55.091159  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 11:18:55.091257  295873 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 11:18:55.128441  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 11:18:55.189226  295873 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:18:55.189303  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 11:18:55.248666  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 11:18:55.271490  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 11:18:55.271570  295873 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 11:18:55.341123  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:18:55.405221  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 11:18:55.405298  295873 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 11:18:55.522880  295873 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 11:18:55.522958  295873 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 11:18:55.576890  295873 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 11:18:55.576968  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 11:18:55.672490  295873 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 11:18:55.672566  295873 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 11:18:55.723716  295873 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 11:18:55.723791  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 11:18:55.837874  295873 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 11:18:55.837949  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 11:18:55.916297  295873 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 11:18:55.916373  295873 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 11:18:56.078280  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 11:18:57.072739  295873 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.20831611s)
	I0908 11:18:57.072820  295873 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0908 11:18:57.072929  295873 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.041480088s)
	I0908 11:18:57.073808  295873 node_ready.go:35] waiting up to 6m0s for node "addons-953262" to be "Ready" ...
	I0908 11:18:57.310775  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.853556987s)
	I0908 11:18:57.334138  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.810093541s)
	I0908 11:18:57.579291  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.047281929s)
	I0908 11:18:57.579393  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.032189349s)
	I0908 11:18:57.779539  295873 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-953262" context rescaled to 1 replicas
	I0908 11:18:58.393503  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.83484509s)
	I0908 11:18:58.670771  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.108263486s)
	W0908 11:18:58.670860  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:18:58.670896  295873 retry.go:31] will retry after 138.006168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
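	The paired log lines above are characteristic of minikube's retry helper (retry.go): the failed apply is logged once as a warning and again with the chosen backoff before being re-run. As an illustrative aside, a minimal retry-with-backoff helper in Go — a sketch in the spirit of these log lines, not minikube's actual implementation; the doubling policy and the failing callback are assumptions:

		package main

		import (
			"fmt"
			"time"
		)

		// retry runs fn up to attempts times, sleeping with a growing
		// backoff between failures; it returns the last error if all fail.
		func retry(attempts int, initial time.Duration, fn func() error) error {
			backoff := initial
			var err error
			for i := 0; i < attempts; i++ {
				if err = fn(); err == nil {
					return nil
				}
				fmt.Printf("will retry after %v: %v\n", backoff, err)
				time.Sleep(backoff)
				backoff *= 2 // assumed policy: double the wait on each failure
			}
			return err
		}

		func main() {
			calls := 0
			err := retry(5, 150*time.Millisecond, func() error {
				calls++
				if calls < 3 { // hypothetical: fail twice, then succeed
					return fmt.Errorf("ssh: handshake failed: EOF")
				}
				return nil
			})
			fmt.Println("done:", err)
		}
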
	I0908 11:18:58.670989  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.040230961s)
	I0908 11:18:58.809684  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 11:18:59.237402  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:00.092294  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.439133998s)
	I0908 11:19:00.092395  295873 addons.go:479] Verifying addon ingress=true in "addons-953262"
	I0908 11:19:00.092717  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429360725s)
	I0908 11:19:00.092815  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.341808018s)
	I0908 11:19:00.093071  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.088423019s)
	I0908 11:19:00.093097  295873 addons.go:479] Verifying addon metrics-server=true in "addons-953262"
	I0908 11:19:00.093133  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.964614377s)
	I0908 11:19:00.093149  295873 addons.go:479] Verifying addon registry=true in "addons-953262"
	I0908 11:19:00.093306  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.844558071s)
	I0908 11:19:00.093636  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.752432192s)
	W0908 11:19:00.093768  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 11:19:00.093839  295873 retry.go:31] will retry after 346.073981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
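	The failure above is a CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch as the CRD that introduces that kind, and the API server has not yet established the new type — hence "ensure CRDs are installed first". minikube's answer, visible below, is to re-apply the whole batch with --force after a backoff; a common manual alternative (a sketch, not what minikube does) is to apply the CRDs first and wait for them to be established:

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
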
	I0908 11:19:00.098045  295873 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-953262 service yakd-dashboard -n yakd-dashboard
	
	I0908 11:19:00.098069  295873 out.go:179] * Verifying ingress addon...
	I0908 11:19:00.098181  295873 out.go:179] * Verifying registry addon...
	I0908 11:19:00.103074  295873 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 11:19:00.103113  295873 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 11:19:00.166110  295873 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 11:19:00.166142  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:00.167120  295873 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 11:19:00.167182  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:00.440164  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:19:00.565330  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.486952988s)
	I0908 11:19:00.565417  295873 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-953262"
	I0908 11:19:00.565685  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.755927963s)
	W0908 11:19:00.565738  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:00.565809  295873 retry.go:31] will retry after 366.748433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:00.569147  295873 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 11:19:00.573075  295873 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 11:19:00.603881  295873 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 11:19:00.604131  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:00.700064  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:00.700136  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:00.932847  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:01.079279  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:01.107990  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:01.108469  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:19:01.577900  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:01.579135  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:01.679253  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:01.679403  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:02.082653  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:02.111744  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:02.111938  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:02.579158  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:02.679407  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:02.679558  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:03.086511  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:03.107192  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:03.107347  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:03.178343  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.738086555s)
	I0908 11:19:03.178437  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.245560378s)
	W0908 11:19:03.178461  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:03.178476  295873 retry.go:31] will retry after 466.697865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:03.578074  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:03.645976  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:03.679193  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:03.679592  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:19:04.079796  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:04.080236  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:04.107760  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:04.108447  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:04.420672  295873 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 11:19:04.420753  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:19:04.441882  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
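	Throughout this section the same connection pattern repeats: resolve the container's published SSH port with a docker inspect Go template, then dial 127.0.0.1 on that port (33139 for this profile, per the sshutil lines). Run by hand, the template from the log prints just the host port; a sketch of the expected session:

		$ docker container inspect \
		    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-953262
		33139
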
	W0908 11:19:04.472291  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:04.472328  295873 retry.go:31] will retry after 973.203698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 11:19:04.556796  295873 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 11:19:04.575644  295873 addons.go:238] Setting addon gcp-auth=true in "addons-953262"
	I0908 11:19:04.575700  295873 host.go:66] Checking if "addons-953262" exists ...
	I0908 11:19:04.576162  295873 cli_runner.go:164] Run: docker container inspect addons-953262 --format={{.State.Status}}
	I0908 11:19:04.578189  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:04.597808  295873 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 11:19:04.597868  295873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-953262
	I0908 11:19:04.607236  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:04.610100  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:04.616013  295873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/addons-953262/id_rsa Username:docker}
	I0908 11:19:04.708361  295873 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:19:04.711248  295873 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 11:19:04.714135  295873 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 11:19:04.714155  295873 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 11:19:04.733027  295873 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 11:19:04.733052  295873 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 11:19:04.751530  295873 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 11:19:04.751601  295873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 11:19:04.770241  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 11:19:05.077175  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:05.114528  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:05.184073  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:05.275234  295873 addons.go:479] Verifying addon gcp-auth=true in "addons-953262"
	I0908 11:19:05.278236  295873 out.go:179] * Verifying gcp-auth addon...
	I0908 11:19:05.281943  295873 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 11:19:05.289466  295873 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 11:19:05.289490  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
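kapi.go implements these waits by polling the pod phase roughly every 500ms. For reference, the same condition can be expressed with kubectl's built-in wait; the label selector and namespace come from the log lines above, while the 90s timeout is an illustrative choice:

    kubectl --context addons-953262 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth \
      --for=condition=Ready --timeout=90s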
	I0908 11:19:05.446492  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:05.582459  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:05.607500  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:05.607968  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:05.785177  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:06.080093  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:06.109000  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:06.109657  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:19:06.263895  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:06.263934  295873 retry.go:31] will retry after 846.907138ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 11:19:06.284770  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:06.577299  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:06.577472  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:06.608279  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:06.608450  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:06.785953  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:07.077487  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:07.107138  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:07.107705  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:07.111824  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:07.284946  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:07.579350  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:07.608093  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:07.608172  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:07.790118  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:07.932070  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:07.932114  295873 retry.go:31] will retry after 1.375485017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 11:19:08.077709  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:08.107121  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:08.107445  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:08.285395  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:08.576844  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:08.607387  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:08.607948  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:08.786173  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:09.077058  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:09.077670  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:09.106728  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:09.107051  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:09.285060  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:09.308245  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:09.578037  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:09.607952  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:09.608400  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:09.784838  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:10.081209  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:10.108105  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:10.108517  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:19:10.136464  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:10.136547  295873 retry.go:31] will retry after 3.928236605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 11:19:10.286053  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:10.575940  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:10.607635  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:10.607705  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:10.785360  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:11.076816  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:11.107165  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:11.107277  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:11.285255  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:11.577060  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:11.577181  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:11.606388  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:11.606666  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:11.785452  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:12.077502  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:12.107000  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:12.107405  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:12.285320  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:12.578013  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:12.606664  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:12.606969  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:12.786163  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:13.076993  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:13.106528  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:13.106829  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:13.284651  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:13.577108  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:13.577506  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:13.606605  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:13.607076  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:13.784843  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:14.065415  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:14.079773  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:14.108452  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:14.108942  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:14.285337  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:14.577961  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:14.607765  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:14.608121  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:14.787628  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:14.907008  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:14.907038  295873 retry.go:31] will retry after 5.052464927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 11:19:15.078328  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:15.106384  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:15.106520  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:15.285435  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:15.577399  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:15.606601  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:15.607469  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:15.786030  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:16.078189  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:16.078690  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:16.106902  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:16.107052  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:16.284715  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:16.577024  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:16.607065  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:16.607280  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:16.785464  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:17.076611  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:17.106895  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:17.106988  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:17.286550  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:17.577341  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:17.606912  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:17.607031  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:17.785898  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:18.077267  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:18.106574  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:18.106579  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:18.285419  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:18.576638  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:18.576774  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:18.606586  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:18.607033  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:18.785535  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:19.076877  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:19.107049  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:19.107233  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:19.285882  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:19.576960  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:19.606737  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:19.607240  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:19.785306  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:19.960568  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:20.077621  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:20.108291  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:20.108505  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:20.285547  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:20.578238  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:20.578380  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:20.606158  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:20.607683  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:20.785834  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:20.818092  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:20.818128  295873 retry.go:31] will retry after 8.888064261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 11:19:21.077873  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:21.107396  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:21.107803  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:21.284696  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:21.576857  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:21.606367  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:21.606895  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:21.784717  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:22.076797  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:22.106780  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:22.107127  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:22.284879  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:22.579656  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:22.586275  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:22.607921  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:22.609501  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:22.786167  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:23.076770  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:23.106731  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:23.106934  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:23.284875  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:23.576799  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:23.612179  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:23.612354  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:23.785336  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:24.077134  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:24.107472  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:24.107615  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:24.285188  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:24.577368  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:24.606578  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:24.607017  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:24.784934  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:25.076211  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:25.077433  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:25.106724  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:25.106987  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:25.284990  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:25.576618  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:25.606042  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:25.606463  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:25.785636  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:26.077159  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:26.106580  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:26.106645  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:26.285515  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:26.576899  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:26.606494  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:26.606684  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:26.785696  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:27.077588  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:27.078121  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:27.106204  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:27.106521  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:27.285296  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:27.577281  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:27.606509  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:27.606731  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:27.785911  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:28.078077  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:28.106382  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:28.107180  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:28.284928  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:28.577353  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:28.606554  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:28.607151  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:28.784933  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:29.077546  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:29.106494  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:29.106911  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:29.284743  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:29.577640  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:29.581758  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:29.607398  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:29.607872  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:29.707080  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:29.785113  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:30.091435  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:30.114088  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:30.114691  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:30.285409  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:30.578192  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:30.607104  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:30.607211  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:19:30.614890  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:30.614934  295873 retry.go:31] will retry after 11.58261432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
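That is the eighth failure of the same apply visible in this excerpt; the recorded delays (0.47s, 0.97s, 0.85s, 1.38s, 3.93s, 5.05s, 8.89s, 11.58s) show retry.go's pattern of roughly doubling waits with random jitter. A shell sketch of such a loop, reusing the command from the log; the attempt cap, starting delay, and jitter range are illustrative, not minikube's exact constants:

    delay=500   # initial backoff in milliseconds, near the observed 0.47s
    for attempt in 1 2 3 4 5 6 7 8; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      # Sleep delay with +/-30% jitter (converted to seconds), then double the base.
      sleep "$(awk -v d="$delay" 'BEGIN{srand(); printf "%.3f", d*(0.7+0.6*rand())/1000}')"
      delay=$((delay * 2))
    done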
	I0908 11:19:30.784688  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:31.077323  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:31.106572  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:31.106640  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:31.285655  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:31.577242  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:31.606917  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:31.607125  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:31.784776  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:19:32.077586  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:32.077799  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:32.106943  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:32.107122  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:32.285117  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:32.576355  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:32.607121  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:32.607274  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:32.785335  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:33.077070  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:33.107281  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:33.107567  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:33.285445  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:33.576986  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:33.607118  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:33.607341  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:33.785401  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:34.076794  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:34.106907  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:34.107253  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:34.285102  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:34.576241  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:34.577567  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:34.607135  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:34.607180  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:34.785258  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:35.077353  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:35.106850  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:35.106952  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:35.285843  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:35.577372  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:35.606825  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:35.607077  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:35.785188  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:36.076926  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:36.107207  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:36.107501  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:36.285523  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:36.577899  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:19:36.577991  295873 node_ready.go:57] node "addons-953262" has "Ready":"False" status (will retry)
	I0908 11:19:36.607483  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:36.607930  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:36.784672  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:37.078017  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:37.106513  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:37.106785  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:37.285507  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:37.577289  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:37.606483  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:37.606662  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:37.785599  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:38.077433  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:38.138292  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:38.138534  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:38.315427  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:38.619272  295873 node_ready.go:49] node "addons-953262" is "Ready"
	I0908 11:19:38.619352  295873 node_ready.go:38] duration metric: took 41.545511791s for node "addons-953262" to be "Ready" ...
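The 41.5s metric covers the span from the first node_ready poll to the node's Ready condition flipping to True. The same check by hand, with the context and node name used throughout this run:

    kubectl --context addons-953262 get node addons-953262 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # -> True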
	I0908 11:19:38.619430  295873 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:19:38.619614  295873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:19:38.619676  295873 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 11:19:38.619764  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:38.664972  295873 api_server.go:72] duration metric: took 45.234037264s to wait for apiserver process to appear ...
	I0908 11:19:38.664993  295873 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:19:38.665013  295873 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 11:19:38.713849  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:38.714274  295873 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 11:19:38.714332  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:38.717410  295873 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
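The healthz probe is a plain HTTPS GET against the apiserver. Reproduced manually it looks like the following; -k skips verification of minikube's self-signed certificate, and anonymous access to /healthz is permitted under default RBAC (adjust if that has been tightened):

    curl -k https://192.168.49.2:8443/healthz
    # -> ok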
	I0908 11:19:38.720540  295873 api_server.go:141] control plane version: v1.34.0
	I0908 11:19:38.720615  295873 api_server.go:131] duration metric: took 55.614188ms to wait for apiserver health ...
	I0908 11:19:38.720639  295873 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:19:38.771277  295873 system_pods.go:59] 19 kube-system pods found
	I0908 11:19:38.771368  295873 system_pods.go:61] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:38.771393  295873 system_pods.go:61] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:38.771429  295873 system_pods.go:61] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:38.771456  295873 system_pods.go:61] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending
	I0908 11:19:38.771479  295873 system_pods.go:61] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:38.771513  295873 system_pods.go:61] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:38.771534  295873 system_pods.go:61] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:38.771563  295873 system_pods.go:61] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:38.771601  295873 system_pods.go:61] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:38.771622  295873 system_pods.go:61] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:38.771645  295873 system_pods.go:61] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:38.771682  295873 system_pods.go:61] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:38.771702  295873 system_pods.go:61] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending
	I0908 11:19:38.771725  295873 system_pods.go:61] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending
	I0908 11:19:38.771758  295873 system_pods.go:61] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending
	I0908 11:19:38.771777  295873 system_pods.go:61] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending
	I0908 11:19:38.771798  295873 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending
	I0908 11:19:38.771842  295873 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:38.771864  295873 system_pods.go:61] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Pending
	I0908 11:19:38.771887  295873 system_pods.go:74] duration metric: took 51.224444ms to wait for pod list to return data ...
	I0908 11:19:38.771919  295873 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:19:38.780027  295873 default_sa.go:45] found service account: "default"
	I0908 11:19:38.780102  295873 default_sa.go:55] duration metric: took 8.161667ms for default service account to be created ...
	I0908 11:19:38.780127  295873 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:19:38.803979  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:38.805226  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:38.805304  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:38.805328  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:38.805365  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:38.805392  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending
	I0908 11:19:38.805412  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:38.805446  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:38.805474  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:38.805496  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:38.805533  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:38.805561  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:38.805584  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:38.805623  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:38.805644  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending
	I0908 11:19:38.805675  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending
	I0908 11:19:38.805703  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending
	I0908 11:19:38.805724  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending
	I0908 11:19:38.805747  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending
	I0908 11:19:38.805819  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:38.805842  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Pending
	I0908 11:19:38.805894  295873 retry.go:31] will retry after 206.135481ms: missing components: kube-dns
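
The retry.go lines here show the shape of the wait loop: each failed check schedules another attempt after a jittered, growing delay until the missing component (kube-dns in this run) reports Running. A minimal shell sketch of that kind of loop, assuming the stock k8s-app=kube-dns label on the CoreDNS pods; this is an illustration, not minikube's actual retry.go:

	delay=0.2
	# Poll until at least one kube-dns pod reports phase Running
	until kubectl --context addons-953262 -n kube-system get pods \
	        -l k8s-app=kube-dns -o jsonpath='{.items[*].status.phase}' \
	        | grep -qw Running; do
	    echo "missing components: kube-dns; will retry after ${delay}s"
	    sleep "${delay}"
	    # grow the delay roughly 1.5x per attempt, mirroring the intervals logged here
	    delay=$(awk -v d="${delay}" 'BEGIN { printf "%.3f", d * 1.5 }')
	done
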
	I0908 11:19:39.028256  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:39.028349  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:39.028374  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:39.028410  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:39.028438  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending
	I0908 11:19:39.028460  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:39.028495  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:39.028523  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:39.028547  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:39.028632  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:39.028653  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:39.028697  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:39.028719  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:39.028743  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending
	I0908 11:19:39.028946  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:19:39.028984  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:19:39.029019  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending
	I0908 11:19:39.029050  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending
	I0908 11:19:39.029078  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:39.029109  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:19:39.029147  295873 retry.go:31] will retry after 258.040882ms: missing components: kube-dns
	I0908 11:19:39.081146  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:39.124535  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:39.124900  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:39.294829  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:39.300770  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:39.300888  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:39.300917  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:39.300986  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:39.301039  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:19:39.301069  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:39.301100  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:39.301146  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:39.301167  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:39.301194  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:39.301232  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:39.301255  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:39.301280  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:39.301317  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:19:39.301342  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:19:39.301371  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:19:39.301411  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:19:39.301434  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:39.301478  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:39.301501  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:19:39.301534  295873 retry.go:31] will retry after 363.620445ms: missing components: kube-dns
	I0908 11:19:39.577930  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:39.679404  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:39.679737  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:39.680342  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:39.680404  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:39.680429  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:39.680451  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:39.680493  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:19:39.680516  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:39.680542  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:39.680576  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:39.680596  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:39.680620  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:39.680650  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:39.680674  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:39.680708  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:39.680766  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:19:39.680789  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:19:39.680816  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:19:39.680853  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:19:39.680875  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:39.680916  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:39.680945  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Running
	I0908 11:19:39.680989  295873 retry.go:31] will retry after 411.763233ms: missing components: kube-dns
	I0908 11:19:39.786211  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:40.077743  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:40.099292  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:40.099381  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:40.099407  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:40.099531  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:40.099565  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:19:40.099592  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:40.099628  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:40.099656  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:40.099679  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:40.099711  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:40.099732  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:40.099764  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:40.099798  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:40.099820  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:19:40.099925  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:19:40.099950  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:19:40.099987  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:19:40.100011  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:40.100039  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:40.100075  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Running
	I0908 11:19:40.100118  295873 retry.go:31] will retry after 576.344428ms: missing components: kube-dns
	I0908 11:19:40.107934  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:40.108176  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:40.285616  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:40.576973  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:40.608849  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:40.609268  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:40.705382  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:40.707165  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:19:40.707189  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:40.707209  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:40.707222  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:19:40.707228  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:40.707240  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:40.707246  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:40.707251  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:40.707262  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:40.707267  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:40.707271  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:40.707285  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:40.707293  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:19:40.707302  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:19:40.707308  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:19:40.707315  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:19:40.707321  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:40.707347  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:40.707356  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Running
	I0908 11:19:40.707383  295873 retry.go:31] will retry after 671.297419ms: missing components: kube-dns
	I0908 11:19:40.806038  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:41.081631  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:41.182361  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:41.182598  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:41.286332  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:41.383765  295873 system_pods.go:86] 19 kube-system pods found
	I0908 11:19:41.383869  295873 system_pods.go:89] "coredns-66bc5c9577-rpw66" [ce14cd93-e17d-44b0-b03b-2ecb01c6d315] Running
	I0908 11:19:41.383898  295873 system_pods.go:89] "csi-hostpath-attacher-0" [69ea6721-168e-4d51-a56d-9896d5b70aec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:19:41.383930  295873 system_pods.go:89] "csi-hostpath-resizer-0" [766ade2d-7882-4ea6-94f1-1d28a355c22b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:19:41.383960  295873 system_pods.go:89] "csi-hostpathplugin-pb6gs" [c6758e0d-5ff6-4575-8dce-d8c329a42c46] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:19:41.383982  295873 system_pods.go:89] "etcd-addons-953262" [1aa8b173-04ed-43bb-9a47-8068798438c4] Running
	I0908 11:19:41.384016  295873 system_pods.go:89] "kindnet-tgklv" [20fe0a02-d98b-4148-b560-882b97e7903f] Running
	I0908 11:19:41.384037  295873 system_pods.go:89] "kube-apiserver-addons-953262" [8f857b8e-f1f1-4dd4-ab14-0779cf2e0c72] Running
	I0908 11:19:41.384068  295873 system_pods.go:89] "kube-controller-manager-addons-953262" [cce2e37b-3445-4907-91f9-ca331f033f0e] Running
	I0908 11:19:41.384106  295873 system_pods.go:89] "kube-ingress-dns-minikube" [7a8887d4-7b53-4d46-9329-f4374772c6a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:19:41.384127  295873 system_pods.go:89] "kube-proxy-mbn2r" [47fe3619-fda2-4220-8cd9-424a6d348635] Running
	I0908 11:19:41.384150  295873 system_pods.go:89] "kube-scheduler-addons-953262" [93fd5476-aa5c-46f9-8d13-f0f386e121de] Running
	I0908 11:19:41.384190  295873 system_pods.go:89] "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:19:41.384212  295873 system_pods.go:89] "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:19:41.384245  295873 system_pods.go:89] "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:19:41.384293  295873 system_pods.go:89] "registry-creds-764b6fb674-7cmbc" [780bb257-22c1-4c4f-82b7-33831f249f36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:19:41.384317  295873 system_pods.go:89] "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:19:41.384356  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c567k" [18b8d985-d101-4c86-8ec7-873e3753759e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:41.384379  295873 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jqd5n" [42ce9dde-f7b9-447b-ae7b-049f95b0d8c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:19:41.384405  295873 system_pods.go:89] "storage-provisioner" [b9a3dbde-b010-4c7b-9c63-f7bbb214c164] Running
	I0908 11:19:41.384441  295873 system_pods.go:126] duration metric: took 2.604291797s to wait for k8s-apps to be running ...
	I0908 11:19:41.384476  295873 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:19:41.384561  295873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:19:41.411106  295873 system_svc.go:56] duration metric: took 26.621085ms WaitForService to wait for kubelet
	I0908 11:19:41.411189  295873 kubeadm.go:578] duration metric: took 47.980257091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:19:41.411225  295873 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:19:41.414782  295873 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 11:19:41.414862  295873 node_conditions.go:123] node cpu capacity is 2
	I0908 11:19:41.414890  295873 node_conditions.go:105] duration metric: took 3.64127ms to run NodePressure ...
	I0908 11:19:41.414918  295873 start.go:241] waiting for startup goroutines ...
	I0908 11:19:41.577449  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:41.608247  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:41.608480  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:41.785319  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:42.081768  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:42.116838  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:42.117376  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:42.198757  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:19:42.286410  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:42.576798  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:42.608423  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:42.608740  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:42.786336  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:43.079283  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:43.109249  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:43.110436  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:43.285361  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:43.342526  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.1436783s)
	W0908 11:19:43.342567  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:19:43.342586  295873 retry.go:31] will retry after 17.212593448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
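
The apply fails kubectl's client-side validation: every top-level Kubernetes manifest must declare apiVersion and kind, and /etc/kubernetes/addons/ig-crd.yaml evidently lacks both (its contents are not shown in this log). The same validator can be exercised offline with a client dry run; a minimal illustration using a hypothetical ConfigMap as a stand-in for the real CRD:

	# Passes validation: apiVersion and kind are present
	cat <<'EOF' | kubectl apply --dry-run=client -f -
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: validation-demo
	EOF
	# Removing the first two lines of the manifest reproduces the error above:
	#   error validating data: [apiVersion not set, kind not set]
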
	I0908 11:19:43.577798  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:43.609022  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:43.609435  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:43.786372  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:44.078094  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:44.106924  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:44.108107  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:44.285102  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:44.577113  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:44.608981  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:44.609669  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:44.785582  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:45.085758  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:45.155481  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:45.157917  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:45.286556  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:45.577640  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:45.608363  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:45.608732  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:45.785765  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:46.141593  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:46.195710  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:46.196659  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:46.285810  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:46.577904  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:46.607312  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:46.608404  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:46.785976  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:47.083920  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:47.182140  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:47.182882  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:47.285393  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:47.579754  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:47.610124  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:47.610573  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:47.787584  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:48.077464  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:48.108222  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:48.108392  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:48.286152  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:48.581598  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:48.611630  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:48.611767  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:48.785008  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:49.078170  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:49.106827  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:49.107191  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:49.286256  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:49.577061  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:49.609411  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:49.609886  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:49.789033  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:50.090750  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:50.112559  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:50.112750  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:50.298716  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:50.578527  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:50.608763  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:50.608937  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:50.785115  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:51.076924  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:51.119523  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:51.119987  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:51.285847  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:51.576857  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:51.611853  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:51.612721  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:51.786263  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:52.077533  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:52.109559  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:52.112021  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:52.307132  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:52.577223  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:52.608546  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:52.608996  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:52.785149  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:53.077022  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:53.108512  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:53.108679  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:53.286465  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:53.577272  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:53.606306  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:53.606905  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:53.785992  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:54.077009  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:54.108263  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:54.108864  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:54.285640  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:54.577904  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:54.607423  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:54.608559  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:54.787408  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:55.077292  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:55.108069  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:55.108380  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:55.289876  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:55.577451  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:55.608432  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:55.609136  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:55.785662  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:56.077735  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:56.107633  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:56.107769  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:56.288618  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:56.577941  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:56.615222  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:56.615498  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:56.784932  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:57.076498  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:57.108300  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:57.108775  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:57.285386  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:57.576732  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:57.607807  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:57.608177  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:57.786585  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:58.077954  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:58.106734  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:58.106879  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:58.285292  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:58.576449  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:58.606886  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:58.607110  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:58.785087  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:59.078251  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:59.122957  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:59.123309  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:59.287633  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:19:59.580304  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:19:59.614558  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:19:59.615017  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:19:59.785636  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:00.087524  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:00.119009  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:00.148683  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:00.288872  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:00.555583  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:20:00.583617  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:00.610572  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:00.632000  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:00.792162  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:01.077729  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:01.109454  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:01.110094  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:01.286284  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:01.578475  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:01.608972  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:01.609427  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:01.786309  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:01.869546  295873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.31391865s)
	W0908 11:20:01.869589  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:20:01.869616  295873 retry.go:31] will retry after 24.847946965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:20:02.085842  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:02.108718  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:02.109046  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:02.285484  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:02.576453  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:02.606888  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:02.607345  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:02.785154  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:03.078435  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:03.106862  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:03.107416  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:03.285428  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:03.577319  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:03.607412  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:03.607549  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:03.785412  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:04.078012  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:04.107325  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:04.107498  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:04.285735  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:04.577647  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:04.609000  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:04.609255  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:04.788329  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:05.080516  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:05.108116  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:05.109261  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:05.285220  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:05.577599  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:05.608471  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:05.609801  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:05.785859  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:06.076087  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:06.107573  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:06.107727  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:06.284770  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:06.576122  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:06.608619  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:06.609621  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:06.785411  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:07.077062  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:07.106703  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:07.106955  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:07.286214  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:07.577029  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:07.608148  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:07.608283  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:07.785706  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:08.079069  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:08.108630  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:08.109320  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:08.285970  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:08.577188  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:08.608304  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:08.608779  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:08.784885  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:09.078673  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:09.108457  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:09.108771  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:09.289391  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:09.583193  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:09.607417  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:09.607518  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:09.785654  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:10.078904  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:10.111008  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:10.111194  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:10.290039  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:10.576665  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:10.609019  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:10.609153  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:10.786522  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:11.077605  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:11.107801  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:11.111133  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:11.291770  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:11.578101  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:11.608034  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:11.608333  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:11.786270  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:12.078473  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:12.107687  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:12.107975  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:12.285703  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:12.577253  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:12.606363  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:12.606820  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:12.786047  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:13.076949  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:13.107582  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:13.107751  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:13.285293  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:13.576891  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:13.607410  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:13.607546  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:13.795377  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:14.082224  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:14.113621  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:14.179777  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:14.284513  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:14.586469  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:14.628082  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:14.628443  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:14.785913  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:15.078109  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:15.108382  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:15.108557  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:15.285579  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:15.576827  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:15.607166  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:15.607305  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:15.785587  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:16.077719  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:16.107184  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:16.107276  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:16.285325  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:16.576741  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:16.609127  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:16.611088  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:16.784736  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:17.078307  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:17.107321  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:17.108088  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:17.285296  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:17.577521  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:17.607689  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:17.607859  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:17.785066  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:18.076716  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:18.107581  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:18.107937  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:18.285555  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:18.577566  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:18.607764  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:18.609697  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:18.785375  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:19.078241  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:19.108577  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:19.109067  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:19.285655  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:19.577108  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:19.607929  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:19.609119  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:19.785157  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:20.078170  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:20.108550  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:20.108998  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:20.285135  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:20.588420  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:20.607654  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:20.607823  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:20.785825  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:21.078175  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:21.108681  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:21.109229  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:21.285371  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:21.577567  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:21.608312  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:21.608820  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:21.785470  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:22.077499  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:22.108535  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:22.108941  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:22.285637  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:22.577715  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:22.607385  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:22.607507  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:22.787864  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:23.078488  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:23.106084  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:23.107788  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:23.285820  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:23.577830  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:23.607484  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:23.608011  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:23.785670  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:24.079427  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:24.108532  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:24.111253  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:24.286662  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:24.577037  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:24.608027  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:24.608498  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:24.785514  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:25.079545  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:25.108335  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:25.108712  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:25.286475  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:25.591512  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:25.609757  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:25.609791  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:25.805113  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:26.077062  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:26.108475  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:26.108930  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:26.285831  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:26.577439  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:26.606339  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:26.608081  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:26.717827  295873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:20:26.790565  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:27.077966  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:27.108002  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:27.108650  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:27.286395  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:27.577450  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:27.607903  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:27.609030  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 11:20:27.641379  295873 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 11:20:27.641498  295873 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
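Note on the failure above: kubectl's client-side validation rejects any YAML document that omits its type metadata, which is why both apply attempts for the inspektor-gadget addon (11:20:00 and 11:20:26) fail with "apiVersion not set, kind not set" even though the other manifests in the same batch apply cleanly. A minimal sketch of the header every manifest document needs; the field values below are illustrative only and are not the actual contents of ig-crd.yaml:

	# Every Kubernetes manifest document must declare its type metadata,
	# or client-side validation fails exactly as in the log above.
	apiVersion: apiextensions.k8s.io/v1   # API group/version of the object (illustrative)
	kind: CustomResourceDefinition        # object type; required per document (illustrative)
	metadata:
	  name: example.gadget.example.io     # hypothetical name, for illustration only

The --validate=false escape hatch suggested in the stderr would only skip the client-side check; a document still missing apiVersion and kind would likely fail anyway, since kubectl needs both fields to resolve the object's REST mapping, so the underlying fix is a manifest with the header present.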
	I0908 11:20:27.785898  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:28.077728  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:28.106967  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:28.107188  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:28.284936  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:28.576782  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:28.607011  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:28.607545  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:28.785623  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:29.077559  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:29.107977  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:29.108742  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:29.285889  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:29.578190  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:29.608823  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:29.609219  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:29.789794  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:30.088603  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:30.129665  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:30.129865  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:30.285824  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:30.576920  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:30.616913  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:30.617472  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:30.785922  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:31.079816  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:31.107339  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:31.107851  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:31.291331  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:31.576848  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:31.607297  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:20:31.607483  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:31.786642  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:32.076578  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:32.107050  295873 kapi.go:107] duration metric: took 1m32.003928077s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 11:20:32.107866  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:32.286182  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:32.576702  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:32.606957  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:32.785727  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:33.078558  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:33.107198  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:33.285206  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:33.576516  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:33.606233  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:33.784985  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:34.076314  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:34.107326  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:34.287580  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:34.577848  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:34.607233  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:34.786858  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:35.078908  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:35.107286  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:35.285268  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:35.579405  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:35.608573  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:35.786646  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:36.079031  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:36.107581  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:36.286703  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:36.577330  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:36.607144  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:36.784953  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:37.080537  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:37.107447  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:37.285854  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:37.596787  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:37.635739  295873 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:20:37.786077  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:38.082488  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:38.109312  295873 kapi.go:107] duration metric: took 1m38.006240221s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 11:20:38.286081  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:38.578190  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:38.785627  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:39.079639  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:39.288151  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:39.576932  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:39.785542  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:40.079032  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:40.285176  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:40.629845  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:40.788335  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:41.077373  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:41.285369  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:41.576783  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:41.785752  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:42.077961  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:42.286873  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:42.577735  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:42.784778  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:43.078039  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:43.284902  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:43.576566  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:43.786469  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:44.077037  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:44.284818  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:44.576192  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:44.784786  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:45.083516  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:45.286068  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:45.576671  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:45.786349  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:46.078938  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:46.284978  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:46.577205  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:46.785233  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:47.077377  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:47.285909  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:47.576727  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:47.785090  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:48.077046  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:48.285588  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:48.578098  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:48.786521  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:49.077061  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:49.285611  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:49.577067  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:49.785441  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:50.078021  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:50.285388  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:50.576762  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:50.786111  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:51.076937  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:51.285399  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:51.576540  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:51.786417  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:52.079019  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:52.285640  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:52.578655  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:52.786369  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:53.076837  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:53.285957  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:53.576082  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:53.785649  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:54.076583  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:54.285642  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:54.576757  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:54.784847  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:55.078755  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:55.285808  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:55.576505  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:55.786248  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:56.078230  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:56.285284  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:56.577146  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:56.785622  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:57.076782  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:57.287049  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:57.577276  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:57.785697  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:58.077956  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:58.289470  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:58.577299  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:58.786064  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:59.076497  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:59.286246  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:20:59.591014  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:20:59.792226  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:00.105088  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:00.297247  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:00.583201  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:00.785956  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:01.077370  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:01.286132  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:01.577140  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:01.785912  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:02.076674  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:02.286101  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:02.576190  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:02.785130  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:03.083643  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:03.292389  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:03.577731  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:03.786393  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:04.085129  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:04.285762  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:04.577733  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:04.785699  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:21:05.077404  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:05.286612  295873 kapi.go:107] duration metric: took 2m0.004665615s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 11:21:05.289576  295873 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-953262 cluster.
	I0908 11:21:05.292398  295873 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 11:21:05.295253  295873 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
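	A minimal sketch of the opt-out described in the message above. The `gcp-auth-skip-secret` label key comes from the log itself; the pod name and label value are illustrative assumptions:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"    # key per the message above; value assumed
	  spec:
	    containers:
	    - name: app
	      image: docker.io/kicbase/echo-server:1.0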
	I0908 11:21:05.577090  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:06.077265  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:06.577223  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:07.076738  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:07.577071  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:08.083099  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:08.576730  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:09.078041  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:09.576642  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:10.077334  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:10.579230  295873 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:21:11.077067  295873 kapi.go:107] duration metric: took 2m10.503988355s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 11:21:11.080264  295873 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, default-storageclass, ingress-dns, cloud-spanner, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0908 11:21:11.083300  295873 addons.go:514] duration metric: took 2m17.651911667s for enable addons: enabled=[nvidia-device-plugin registry-creds amd-gpu-device-plugin default-storageclass ingress-dns cloud-spanner storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0908 11:21:11.083421  295873 start.go:246] waiting for cluster config update ...
	I0908 11:21:11.083453  295873 start.go:255] writing updated cluster config ...
	I0908 11:21:11.083776  295873 ssh_runner.go:195] Run: rm -f paused
	I0908 11:21:11.088184  295873 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:21:11.091618  295873 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rpw66" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.097000  295873 pod_ready.go:94] pod "coredns-66bc5c9577-rpw66" is "Ready"
	I0908 11:21:11.097034  295873 pod_ready.go:86] duration metric: took 5.384547ms for pod "coredns-66bc5c9577-rpw66" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.099886  295873 pod_ready.go:83] waiting for pod "etcd-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.104872  295873 pod_ready.go:94] pod "etcd-addons-953262" is "Ready"
	I0908 11:21:11.104905  295873 pod_ready.go:86] duration metric: took 4.991557ms for pod "etcd-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.107341  295873 pod_ready.go:83] waiting for pod "kube-apiserver-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.112063  295873 pod_ready.go:94] pod "kube-apiserver-addons-953262" is "Ready"
	I0908 11:21:11.112089  295873 pod_ready.go:86] duration metric: took 4.708755ms for pod "kube-apiserver-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.114682  295873 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.491970  295873 pod_ready.go:94] pod "kube-controller-manager-addons-953262" is "Ready"
	I0908 11:21:11.491998  295873 pod_ready.go:86] duration metric: took 377.29084ms for pod "kube-controller-manager-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:11.693028  295873 pod_ready.go:83] waiting for pod "kube-proxy-mbn2r" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:12.092982  295873 pod_ready.go:94] pod "kube-proxy-mbn2r" is "Ready"
	I0908 11:21:12.093012  295873 pod_ready.go:86] duration metric: took 399.911594ms for pod "kube-proxy-mbn2r" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:12.292523  295873 pod_ready.go:83] waiting for pod "kube-scheduler-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:12.691858  295873 pod_ready.go:94] pod "kube-scheduler-addons-953262" is "Ready"
	I0908 11:21:12.691888  295873 pod_ready.go:86] duration metric: took 399.334012ms for pod "kube-scheduler-addons-953262" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:21:12.691901  295873 pod_ready.go:40] duration metric: took 1.603681975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:21:12.746682  295873 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:21:12.750567  295873 out.go:179] * Done! kubectl is now configured to use "addons-953262" cluster and "default" namespace by default
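	The extra "Ready" wait above can be reproduced by hand with the same selectors the log uses; a sketch in the style of this test's own kubectl invocations (timeout value illustrative):
	
	  kubectl --context addons-953262 -n kube-system wait pod --selector k8s-app=kube-dns --for=condition=Ready --timeout=240s
	  kubectl --context addons-953262 -n kube-system wait pod --selector component=kube-scheduler --for=condition=Ready --timeout=240s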
	
	
	==> CRI-O <==
	Sep 08 11:23:49 addons-953262 crio[988]: time="2025-09-08 11:23:49.204524173Z" level=info msg="Removed pod sandbox: a2e91bd7f07be817d9857ba4641a3b50ca10f50530415fdf919d259e5cf92d37" id=c90b59c0-5ca7-4d2e-89dd-e895cea10b8b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.012467019Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-hqk9h/POD" id=f80fbbd6-d1bf-4c3f-920a-3d5808b662b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.012537469Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.064822726Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hqk9h Namespace:default ID:45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30 UID:ac54aff5-45af-4843-9f9a-322bbb34f3ab NetNS:/var/run/netns/2b22b63e-4ef2-4344-bd7b-2fa5abec3cfc Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.064889016Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-hqk9h to CNI network \"kindnet\" (type=ptp)"
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.082029387Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hqk9h Namespace:default ID:45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30 UID:ac54aff5-45af-4843-9f9a-322bbb34f3ab NetNS:/var/run/netns/2b22b63e-4ef2-4344-bd7b-2fa5abec3cfc Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.082181906Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-hqk9h for CNI network kindnet (type=ptp)"
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.090395423Z" level=info msg="Ran pod sandbox 45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30 with infra container: default/hello-world-app-5d498dc89-hqk9h/POD" id=f80fbbd6-d1bf-4c3f-920a-3d5808b662b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.091771671Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8af94ab9-6e7c-4c1a-95eb-d25b3578cd15 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.092023276Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=8af94ab9-6e7c-4c1a-95eb-d25b3578cd15 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.092885433Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=fe2abd8d-c339-4cbc-bf39-e42c16e8fe5d name=/runtime.v1.ImageService/PullImage
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.095269924Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 11:25:19 addons-953262 crio[988]: time="2025-09-08 11:25:19.334946593Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.022229077Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=fe2abd8d-c339-4cbc-bf39-e42c16e8fe5d name=/runtime.v1.ImageService/PullImage
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.023500881Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4bf3c3ac-6909-4cca-916f-b473337aa3c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.024626820Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4bf3c3ac-6909-4cca-916f-b473337aa3c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.030337156Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=40c2ea0b-c522-4fcc-bfe3-2f1dc0a8b429 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.031520335Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=40c2ea0b-c522-4fcc-bfe3-2f1dc0a8b429 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.037167368Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-hqk9h/hello-world-app" id=8faa50ca-0af5-4299-809e-3fc9f0a76c49 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.037475860Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.064674597Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/44d510da2b39dfc48a93ebe77aedef14f3e20007e57a5fa4d93ef9948d62a81c/merged/etc/passwd: no such file or directory"
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.065008230Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/44d510da2b39dfc48a93ebe77aedef14f3e20007e57a5fa4d93ef9948d62a81c/merged/etc/group: no such file or directory"
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.154696174Z" level=info msg="Created container d7e24ea51e6b7477dbab7b0443294b487f10d63a59dfe8d9b58d6a204fd5d48c: default/hello-world-app-5d498dc89-hqk9h/hello-world-app" id=8faa50ca-0af5-4299-809e-3fc9f0a76c49 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.155482950Z" level=info msg="Starting container: d7e24ea51e6b7477dbab7b0443294b487f10d63a59dfe8d9b58d6a204fd5d48c" id=60faf0c3-f52e-4cb5-8ba8-2552743bff72 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 11:25:20 addons-953262 crio[988]: time="2025-09-08 11:25:20.165925451Z" level=info msg="Started container" PID=9904 containerID=d7e24ea51e6b7477dbab7b0443294b487f10d63a59dfe8d9b58d6a204fd5d48c description=default/hello-world-app-5d498dc89-hqk9h/hello-world-app id=60faf0c3-f52e-4cb5-8ba8-2552743bff72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30
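	The CRI-O entries above trace the standard CRI lifecycle for the hello-world-app pod: RunPodSandbox, ImageStatus (miss), PullImage, ImageStatus (hit), CreateContainer, StartContainer. The image-side steps can be spot-checked from the node with crictl; a sketch, assuming the usual minikube node layout:
	
	  minikube -p addons-953262 ssh -- sudo crictl images | grep echo-server    # ImageStatus equivalent
	  minikube -p addons-953262 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0
	  minikube -p addons-953262 ssh -- sudo crictl ps --name hello-world-app    # confirms StartContainer took effect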
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	d7e24ea51e6b7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   45366730e7d2c       hello-world-app-5d498dc89-hqk9h
	0cee016f829c7       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   69b98df515661       nginx
	4602861714f4e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   902a13066866d       busybox
	8a054593ccc54       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b3f8a40cecf84afd8a5299442eab04c52f913ef6194e01dc4fbeb783f9d42c58            4 minutes ago            Running             gadget                    0                   f2a5b666ac8ed       gadget-qxmbl
	6bc2ece4f2f17       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago            Running             controller                0                   46ce4d17ab8e4       ingress-nginx-controller-9cc49f96f-jnxb8
	98f5da1bbc9ea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   5 minutes ago            Exited              patch                     0                   22fa04193dc17       ingress-nginx-admission-patch-x5kd7
	262f31b4ce442       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   5 minutes ago            Exited              create                    0                   26ea09a259239       ingress-nginx-admission-create-l29nl
	d5a78877b102b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               5 minutes ago            Running             minikube-ingress-dns      0                   160b481ae1757       kube-ingress-dns-minikube
	58746423cf7cf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   4abffe2460c76       coredns-66bc5c9577-rpw66
	81bfd15e7836c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   d5e3a3c6e1f95       storage-provisioner
	12b5c86fba9e1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             6 minutes ago            Running             kindnet-cni               0                   1d9c718982e71       kindnet-tgklv
	418e07d77c93a       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             6 minutes ago            Running             kube-proxy                0                   4f18539c06f5e       kube-proxy-mbn2r
	4f167fdeec08d       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             6 minutes ago            Running             kube-apiserver            0                   dea222a41d054       kube-apiserver-addons-953262
	dab8054e89f11       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             6 minutes ago            Running             etcd                      0                   26bfa8e3a9441       etcd-addons-953262
	e5ca213c2a641       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             6 minutes ago            Running             kube-scheduler            0                   0425374d062e0       kube-scheduler-addons-953262
	f42e51f841ef4       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             6 minutes ago            Running             kube-controller-manager   0                   152506120c452       kube-controller-manager-addons-953262
	
	
	==> coredns [58746423cf7cf0ff680b6b3d3067aa802bac7a82e6e4417184e5b40f231f0b8d] <==
	[INFO] 10.244.0.17:44914 - 55309 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002755974s
	[INFO] 10.244.0.17:44914 - 30602 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121019s
	[INFO] 10.244.0.17:44914 - 34758 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000340165s
	[INFO] 10.244.0.17:39261 - 3045 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177955s
	[INFO] 10.244.0.17:39261 - 3242 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145117s
	[INFO] 10.244.0.17:60982 - 18220 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079869s
	[INFO] 10.244.0.17:60982 - 18640 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102902s
	[INFO] 10.244.0.17:33943 - 65065 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100777s
	[INFO] 10.244.0.17:33943 - 65510 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000156983s
	[INFO] 10.244.0.17:46734 - 16871 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001353692s
	[INFO] 10.244.0.17:46734 - 16434 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001413927s
	[INFO] 10.244.0.17:55111 - 58721 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150911s
	[INFO] 10.244.0.17:55111 - 58557 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000253763s
	[INFO] 10.244.0.21:49671 - 19132 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000208003s
	[INFO] 10.244.0.21:57843 - 32868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104576s
	[INFO] 10.244.0.21:34314 - 29485 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094804s
	[INFO] 10.244.0.21:39674 - 55741 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00008376s
	[INFO] 10.244.0.21:51421 - 60418 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097027s
	[INFO] 10.244.0.21:54834 - 19814 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070852s
	[INFO] 10.244.0.21:48710 - 47663 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002194761s
	[INFO] 10.244.0.21:47087 - 49905 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002796419s
	[INFO] 10.244.0.21:58428 - 62618 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000933378s
	[INFO] 10.244.0.21:60833 - 44105 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001518486s
	[INFO] 10.244.0.24:55191 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207042s
	[INFO] 10.244.0.24:49747 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156465s
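	
	The NXDOMAIN/NOERROR pairs above are ndots-driven search-path expansion: the 10.244.0.17 client tries each search domain before the bare cluster name resolves. A sketch of a resolv.conf that would produce this pattern (the nameserver IP is the conventional kube-dns default and is an assumption here):
	
	  nameserver 10.96.0.10
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  options ndots:5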
	
	
	==> describe nodes <==
	Name:               addons-953262
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-953262
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=addons-953262
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_18_49_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-953262
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:18:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-953262
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:23:23 +0000   Mon, 08 Sep 2025 11:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:23:23 +0000   Mon, 08 Sep 2025 11:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:23:23 +0000   Mon, 08 Sep 2025 11:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:23:23 +0000   Mon, 08 Sep 2025 11:19:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-953262
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 933a81e6b00e4994a6f25777dc978c62
	  System UUID:                ff2e5cd9-cd40-458c-a62a-261df4f3f7f8
	  Boot ID:                    96333a60-ea75-4725-84ac-97579709a820
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     hello-world-app-5d498dc89-hqk9h             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-qxmbl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jnxb8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m22s
	  kube-system                 coredns-66bc5c9577-rpw66                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m26s
	  kube-system                 etcd-addons-953262                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m33s
	  kube-system                 kindnet-tgklv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m28s
	  kube-system                 kube-apiserver-addons-953262                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-addons-953262       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-mbn2r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-addons-953262                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m20s                  kube-proxy       
	  Normal   Starting                 6m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m41s (x8 over 6m41s)  kubelet          Node addons-953262 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m41s (x8 over 6m41s)  kubelet          Node addons-953262 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m41s (x8 over 6m41s)  kubelet          Node addons-953262 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m33s                  kubelet          Node addons-953262 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m33s                  kubelet          Node addons-953262 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m33s                  kubelet          Node addons-953262 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m29s                  node-controller  Node addons-953262 event: Registered Node addons-953262 in Controller
	  Normal   NodeReady                5m43s                  kubelet          Node addons-953262 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 8 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013821] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033978] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.739980] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.579788] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 8 11:17] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [dab8054e89f110905910e3e08fc9594f84edc223ec4b3e8d6d6999a5fcdaaa6d] <==
	{"level":"warn","ts":"2025-09-08T11:18:55.268655Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.741016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-tgklv\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-09-08T11:18:55.268718Z","caller":"traceutil/trace.go:172","msg":"trace[601687004] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-tgklv; range_end:; response_count:1; response_revision:374; }","duration":"130.817284ms","start":"2025-09-08T11:18:55.137887Z","end":"2025-09-08T11:18:55.268705Z","steps":["trace[601687004] 'agreement among raft nodes before linearized reading'  (duration: 106.60748ms)","trace[601687004] 'range keys from in-memory index tree'  (duration: 23.507805ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:18:57.492628Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.086855ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128039829082516397 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:352 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T11:18:57.506911Z","caller":"traceutil/trace.go:172","msg":"trace[332355582] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"318.026669ms","start":"2025-09-08T11:18:57.188864Z","end":"2025-09-08T11:18:57.506890Z","steps":["trace[332355582] 'process raft request'  (duration: 57.615592ms)","trace[332355582] 'compare'  (duration: 234.973869ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:18:57.507052Z","caller":"traceutil/trace.go:172","msg":"trace[1857683862] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"272.501296ms","start":"2025-09-08T11:18:57.234460Z","end":"2025-09-08T11:18:57.506961Z","steps":["trace[1857683862] 'process raft request'  (duration: 272.429287ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:18:57.507090Z","caller":"traceutil/trace.go:172","msg":"trace[1661941489] linearizableReadLoop","detail":"{readStateIndex:408; appliedIndex:407; }","duration":"116.439552ms","start":"2025-09-08T11:18:57.390644Z","end":"2025-09-08T11:18:57.507084Z","steps":["trace[1661941489] 'read index received'  (duration: 96.566823ms)","trace[1661941489] 'applied index is now lower than readState.Index'  (duration: 19.871712ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:18:57.507054Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:18:57.188853Z","time spent":"318.12368ms","remote":"127.0.0.1:55204","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4323,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:352 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-09-08T11:18:57.507279Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.620093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-addons-953262\" limit:1 ","response":"range_response_count:1 size:4969"}
	{"level":"info","ts":"2025-09-08T11:18:57.512448Z","caller":"traceutil/trace.go:172","msg":"trace[1634939553] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-addons-953262; range_end:; response_count:1; response_revision:397; }","duration":"121.791213ms","start":"2025-09-08T11:18:57.390639Z","end":"2025-09-08T11:18:57.512430Z","steps":["trace[1634939553] 'agreement among raft nodes before linearized reading'  (duration: 116.512809ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:18:57.512875Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.141528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T11:18:57.512929Z","caller":"traceutil/trace.go:172","msg":"trace[1756293633] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:397; }","duration":"122.203387ms","start":"2025-09-08T11:18:57.390714Z","end":"2025-09-08T11:18:57.512917Z","steps":["trace[1756293633] 'agreement among raft nodes before linearized reading'  (duration: 122.111997ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:18:57.536637Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.809226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4338"}
	{"level":"info","ts":"2025-09-08T11:18:57.543081Z","caller":"traceutil/trace.go:172","msg":"trace[1976768052] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:402; }","duration":"152.247827ms","start":"2025-09-08T11:18:57.390809Z","end":"2025-09-08T11:18:57.543057Z","steps":["trace[1976768052] 'agreement among raft nodes before linearized reading'  (duration: 145.78521ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:18:57.543469Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.711118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T11:18:57.543507Z","caller":"traceutil/trace.go:172","msg":"trace[53014831] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:402; }","duration":"152.765971ms","start":"2025-09-08T11:18:57.390733Z","end":"2025-09-08T11:18:57.543499Z","steps":["trace[53014831] 'agreement among raft nodes before linearized reading'  (duration: 152.686207ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:18:57.553953Z","caller":"traceutil/trace.go:172","msg":"trace[876336341] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"163.071714ms","start":"2025-09-08T11:18:57.390863Z","end":"2025-09-08T11:18:57.553935Z","steps":["trace[876336341] 'process raft request'  (duration: 122.215054ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:18:57.554191Z","caller":"traceutil/trace.go:172","msg":"trace[50860671] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"114.555926ms","start":"2025-09-08T11:18:57.439627Z","end":"2025-09-08T11:18:57.554183Z","steps":["trace[50860671] 'process raft request'  (duration: 73.526037ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:18:57.554339Z","caller":"traceutil/trace.go:172","msg":"trace[825584568] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"114.600119ms","start":"2025-09-08T11:18:57.439731Z","end":"2025-09-08T11:18:57.554331Z","steps":["trace[825584568] 'process raft request'  (duration: 73.452297ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:18:57.554427Z","caller":"traceutil/trace.go:172","msg":"trace[689895592] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"114.415723ms","start":"2025-09-08T11:18:57.440006Z","end":"2025-09-08T11:18:57.554421Z","steps":["trace[689895592] 'process raft request'  (duration: 81.202429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:19:00.790649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:19:00.807574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:19:22.594174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:19:22.606808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:19:22.642962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:19:22.662870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36336","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:25:21 up  1:07,  0 users,  load average: 0.31, 1.35, 2.52
	Linux addons-953262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [12b5c86fba9e1828dc70ca8ee31a1b4605d98ef8b02ae09567451ca1902d2fca] <==
	I0908 11:23:17.672817       1 main.go:301] handling current node
	I0908 11:23:27.672005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:23:27.672037       1 main.go:301] handling current node
	I0908 11:23:37.676435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:23:37.676471       1 main.go:301] handling current node
	I0908 11:23:47.679597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:23:47.679633       1 main.go:301] handling current node
	I0908 11:23:57.677369       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:23:57.677489       1 main.go:301] handling current node
	I0908 11:24:07.677866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:24:07.677898       1 main.go:301] handling current node
	I0908 11:24:17.673147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:24:17.673191       1 main.go:301] handling current node
	I0908 11:24:27.679469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:24:27.679590       1 main.go:301] handling current node
	I0908 11:24:37.675547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:24:37.675581       1 main.go:301] handling current node
	I0908 11:24:47.680256       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:24:47.680290       1 main.go:301] handling current node
	I0908 11:24:57.673032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:24:57.673146       1 main.go:301] handling current node
	I0908 11:25:07.671672       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:25:07.671799       1 main.go:301] handling current node
	I0908 11:25:17.671482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:25:17.671517       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f167fdeec08da3d37f20897558c7ae1ad6c810e76c2fdb45b16fcb9613a6c6b] <==
	I0908 11:22:42.393608       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0908 11:22:44.476413       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0908 11:22:53.233401       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 11:22:58.693646       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 11:22:59.075080       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.213.168"}
	E0908 11:23:11.561024       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0908 11:23:12.483692       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 11:23:12.483880       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 11:23:12.520388       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 11:23:12.520442       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 11:23:12.534789       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 11:23:12.534914       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 11:23:12.548942       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 11:23:12.549344       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 11:23:12.579153       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 11:23:12.579190       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0908 11:23:12.595219       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0908 11:23:13.536490       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 11:23:13.579800       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0908 11:23:13.704821       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0908 11:23:44.663626       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:23:58.321387       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:24:50.912550       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:25:05.570358       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:25:18.903091       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.124.60"}
	
	
	==> kube-controller-manager [f42e51f841ef47d05e33a00b618471df89a169bc1fca1483b7a1fdb293197313] <==
	I0908 11:23:22.725976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:23:22.782024       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 11:23:22.782240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 11:23:32.096521       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:23:32.097545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:23:33.873367       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:23:33.874428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:23:35.037005       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:23:35.038192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:23:47.344988       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:23:47.346217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:23:50.859791       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:23:50.861160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:23:54.281342       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:23:54.282552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:24:26.602082       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:24:26.603101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:24:31.122349       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:24:31.123750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:24:33.320283       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:24:33.321248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:25:03.537067       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:25:03.538149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 11:25:05.082242       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 11:25:05.083396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
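	
The repeating pairs above come from a metadata informer (note the reflector path k8s.io/client-go/metadata/metadatainformer): each streaming watchlist attempt falls back to a plain LIST, which then fails because the API server no longer serves that resource type. This is the usual signature of a CRD or aggregated API being removed (for instance when an addon is torn down) while an informer still holds a reference to it. A minimal diagnostic sketch, assuming only a reachable kubeconfig at the default location; it is not part of the test suite, just one way to see which group/versions the server still serves and which fail discovery:

	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// ServerGroupsAndResources returns a partial result plus an aggregate
		// error when some group/versions fail discovery, which is the same
		// condition the reflector keeps logging above.
		_, lists, err := dc.ServerGroupsAndResources()
		if err != nil {
			fmt.Println("partial discovery failure:", err)
		}
		for _, l := range lists {
			fmt.Printf("%s: %d resources\n", l.GroupVersion, len(l.APIResources))
		}
	}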
	
	
	==> kube-proxy [418e07d77c93aa1714089789cfc7d8e5855a927cf8753bca09a7ea80d0e73b5e] <==
	I0908 11:18:58.392631       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:18:59.464027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:18:59.746876       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:18:59.747001       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:18:59.747110       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:18:59.978057       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:18:59.989565       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:19:00.097588       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:19:00.098388       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:19:00.098415       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:19:00.100336       1 config.go:200] "Starting service config controller"
	I0908 11:19:00.100361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:19:00.100384       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:19:00.100389       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:19:00.100405       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:19:00.100410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:19:00.101112       1 config.go:309] "Starting node config controller"
	I0908 11:19:00.101121       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:19:00.101129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:19:00.207367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:19:00.207473       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:19:00.243331       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
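	
The single error in this section is advisory rather than fatal: with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP, and the message itself suggests narrowing that. A minimal sketch of the suggested narrowing, assuming kube-proxy is driven by a KubeProxyConfiguration object (the form kubeadm-style clusters keep in the kube-proxy ConfigMap); all unrelated fields are omitted:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# "primary" limits NodePort listeners to the node's primary IPs, matching
	# the `--nodeport-addresses primary` hint in the message above.
	nodePortAddresses:
	  - "primary"

Left unset, as here, the behavior is merely noisier, not broken; the suite still reaches NodePort services on the node IP.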
	
	
	==> kube-scheduler [e5ca213c2a64137f472133a5c7bb5daa81837ddf563b4ea6093aca20f1b74002] <==
	I0908 11:18:45.971320       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:18:47.258629       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:18:47.258663       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:18:47.258674       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:18:47.258683       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:18:47.286357       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:18:47.286388       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:18:47.288874       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:18:47.288992       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:18:47.289018       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:18:47.289043       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 11:18:47.304457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0908 11:18:48.489745       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 11:24:38 addons-953262 kubelet[1531]: E0908 11:24:38.981270    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4f3e834bacbc4c1b1dd53e1856cd85d261efb636bc0ab480c9e078bc636b7bb4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4f3e834bacbc4c1b1dd53e1856cd85d261efb636bc0ab480c9e078bc636b7bb4/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:39 addons-953262 kubelet[1531]: E0908 11:24:39.155663    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7e270ef421ae63a6e4d27e5d4152019a41154983a5a1ff070327eeb50646a7ee/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7e270ef421ae63a6e4d27e5d4152019a41154983a5a1ff070327eeb50646a7ee/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:39 addons-953262 kubelet[1531]: E0908 11:24:39.162929    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/92fbe3274cefdd65e33e57d19f26f36fe6b93bed453f1129cb6e0b694f63cfaf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/92fbe3274cefdd65e33e57d19f26f36fe6b93bed453f1129cb6e0b694f63cfaf/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:39 addons-953262 kubelet[1531]: E0908 11:24:39.249655    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/91ff8e533e31a4dcbdf13a2e052946ba53c465d5911c9685f006271b6e565b4b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/91ff8e533e31a4dcbdf13a2e052946ba53c465d5911c9685f006271b6e565b4b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:39 addons-953262 kubelet[1531]: E0908 11:24:39.320052    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/80722c73ad873b39157099006a6e9ef0afb8f9db7e8d07dc18b583cb87693206/diff" to get inode usage: stat /var/lib/containers/storage/overlay/80722c73ad873b39157099006a6e9ef0afb8f9db7e8d07dc18b583cb87693206/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.770867    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dffcfdb07ccb20febebbbec0d12e9c2b384550e2df8787712ad40dddcb758a55/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dffcfdb07ccb20febebbbec0d12e9c2b384550e2df8787712ad40dddcb758a55/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.781638    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7e270ef421ae63a6e4d27e5d4152019a41154983a5a1ff070327eeb50646a7ee/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7e270ef421ae63a6e4d27e5d4152019a41154983a5a1ff070327eeb50646a7ee/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.814456    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/62408dc34571175062ff5e6771e80df6eb3c6d4400b982b44e286b6d25f35a87/diff" to get inode usage: stat /var/lib/containers/storage/overlay/62408dc34571175062ff5e6771e80df6eb3c6d4400b982b44e286b6d25f35a87/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.815562    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/80722c73ad873b39157099006a6e9ef0afb8f9db7e8d07dc18b583cb87693206/diff" to get inode usage: stat /var/lib/containers/storage/overlay/80722c73ad873b39157099006a6e9ef0afb8f9db7e8d07dc18b583cb87693206/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.821863    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/92fbe3274cefdd65e33e57d19f26f36fe6b93bed453f1129cb6e0b694f63cfaf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/92fbe3274cefdd65e33e57d19f26f36fe6b93bed453f1129cb6e0b694f63cfaf/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.827222    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/91ff8e533e31a4dcbdf13a2e052946ba53c465d5911c9685f006271b6e565b4b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/91ff8e533e31a4dcbdf13a2e052946ba53c465d5911c9685f006271b6e565b4b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.835132    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/62408dc34571175062ff5e6771e80df6eb3c6d4400b982b44e286b6d25f35a87/diff" to get inode usage: stat /var/lib/containers/storage/overlay/62408dc34571175062ff5e6771e80df6eb3c6d4400b982b44e286b6d25f35a87/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.869811    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757330688869399702 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:24:48 addons-953262 kubelet[1531]: E0908 11:24:48.869845    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757330688869399702 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:24:58 addons-953262 kubelet[1531]: E0908 11:24:58.872852    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757330698872578490 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:24:58 addons-953262 kubelet[1531]: E0908 11:24:58.872887    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757330698872578490 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:25:02 addons-953262 kubelet[1531]: I0908 11:25:02.673712    1531 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 11:25:08 addons-953262 kubelet[1531]: E0908 11:25:08.875718    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757330708875434109 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:25:08 addons-953262 kubelet[1531]: E0908 11:25:08.875753    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757330708875434109 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:25:11 addons-953262 kubelet[1531]: E0908 11:25:11.989372    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be92a7940440c3145550be38089a73ecdf0e469003719843fd307c06768dd4a4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be92a7940440c3145550be38089a73ecdf0e469003719843fd307c06768dd4a4/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:25:15 addons-953262 kubelet[1531]: E0908 11:25:15.695084    1531 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eae2c022eeb0e07ced3c419247cbbe79f5ec0407e3f09d7f97620f6e004c9420/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eae2c022eeb0e07ced3c419247cbbe79f5ec0407e3f09d7f97620f6e004c9420/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:25:18 addons-953262 kubelet[1531]: I0908 11:25:18.804560    1531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshp9\" (UniqueName: \"kubernetes.io/projected/ac54aff5-45af-4843-9f9a-322bbb34f3ab-kube-api-access-tshp9\") pod \"hello-world-app-5d498dc89-hqk9h\" (UID: \"ac54aff5-45af-4843-9f9a-322bbb34f3ab\") " pod="default/hello-world-app-5d498dc89-hqk9h"
	Sep 08 11:25:18 addons-953262 kubelet[1531]: E0908 11:25:18.878602    1531 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757330718878356052 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:25:18 addons-953262 kubelet[1531]: E0908 11:25:18.878644    1531 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757330718878356052 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 08 11:25:19 addons-953262 kubelet[1531]: W0908 11:25:19.086316    1531 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce9f06c38431f433d7edcefeaac37bf9c8eb489d6b96707da46b76779f49221/crio-45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30 WatchSource:0}: Error finding container 45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30: Status 404 returned error can't find the container with id 45366730e7d2c53aed67b0b10e4bba54d45fa669f59ac34aaf49591c978dac30
	
	
	==> storage-provisioner [81bfd15e7836cd5ecbaed31975c3227b18639c6012321ad42ec90b8aa86d8dfd] <==
	W0908 11:24:57.282475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:24:59.286271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:24:59.292725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:01.296197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:01.300765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:03.306420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:03.310642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:05.313732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:05.320652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:07.323554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:07.328551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:09.331679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:09.336172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:11.339221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:11.343749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:13.347298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:13.352238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:15.355447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:15.361327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:17.364064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:17.368941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:19.373289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:19.378411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:21.383093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:25:21.395164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
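
One note on the storage-provisioner block that closes this dump: the warnings are deprecation notices, not failures, and their two-second cadence suggests a leader-election heartbeat against a legacy v1 Endpoints lock rather than service discovery. If that reading is right (an assumption; the binary's internals are not visible in this log), the modern replacement is a coordination.k8s.io Lease lock. A sketch with client-go follows; the lock name, namespace, and holder identity are illustrative:

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A Lease lock renews against coordination.k8s.io/v1 instead of
		// polling v1 Endpoints, so it draws no deprecation warnings.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* provision volumes */ },
				OnStoppedLeading: func() { /* stand down */ },
			},
		})
	}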
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-953262 -n addons-953262
helpers_test.go:269: (dbg) Run:  kubectl --context addons-953262 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-l29nl ingress-nginx-admission-patch-x5kd7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-953262 describe pod ingress-nginx-admission-create-l29nl ingress-nginx-admission-patch-x5kd7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-953262 describe pod ingress-nginx-admission-create-l29nl ingress-nginx-admission-patch-x5kd7: exit status 1 (92.374224ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l29nl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x5kd7" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-953262 describe pod ingress-nginx-admission-create-l29nl ingress-nginx-admission-patch-x5kd7: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 addons disable ingress-dns --alsologtostderr -v=1: (1.560489711s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 addons disable ingress --alsologtostderr -v=1: (7.81722573s)
--- FAIL: TestAddons/parallel/Ingress (153.38s)

TestFunctional/parallel/ServiceCmdConnect (603.84s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-594147 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-594147 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-jjn4t" [296a60c6-bfbb-4a5c-a218-b0b66133ebaa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-594147 -n functional-594147
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 11:39:33.17568555 +0000 UTC m=+1311.795226812
functional_test.go:1645: (dbg) Run:  kubectl --context functional-594147 describe po hello-node-connect-7d85dfc575-jjn4t -n default
functional_test.go:1645: (dbg) kubectl --context functional-594147 describe po hello-node-connect-7d85dfc575-jjn4t -n default:
Name:             hello-node-connect-7d85dfc575-jjn4t
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-594147/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:29:32 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ght5b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ght5b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jjn4t to functional-594147
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x19 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Normal   Pulling    4m30s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"
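
The Events table carries the root cause for this entire test: CRI-O refused the short image name because the node defines no unqualified-search registries, so "kicbase/echo-server:latest" cannot be mapped to any registry at all. Two conventional remedies, sketched here rather than prescribed, since the suite's intended fix is not shown: fully qualify the image as docker.io/kicbase/echo-server in the deployment, or give the node a search registry in its registries configuration (path below assumed; CRI-O reads containers-registries.conf(5) format):

	# /etc/containers/registries.conf on the minikube node
	unqualified-search-registries = ["docker.io"]

With either change, the kubelet's next pull attempt can resolve the name instead of cycling through ErrImagePull and ImagePullBackOff as the events record.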
functional_test.go:1645: (dbg) Run:  kubectl --context functional-594147 logs hello-node-connect-7d85dfc575-jjn4t -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-594147 logs hello-node-connect-7d85dfc575-jjn4t -n default: exit status 1 (105.505757ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-jjn4t" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-594147 logs hello-node-connect-7d85dfc575-jjn4t -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-594147 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-jjn4t
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-594147/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:29:32 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ght5b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ght5b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jjn4t to functional-594147
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x19 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Normal   Pulling    4m30s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-594147 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-594147 logs -l app=hello-node-connect: exit status 1 (82.978486ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-jjn4t" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-594147 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-594147 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.196.37
IPs:                      10.99.196.37
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32454/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
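
The empty Endpoints field above ties the symptoms together: the service exists and exposes NodePort 32454, but with its only pod stuck in ImagePullBackOff there is no ready endpoint behind it, so any connection attempt would fail even before the image issue is addressed. A small readiness check in the same spirit as the suite's polling, assuming a standard client-go clientset; the namespace and service name are taken from the describe output above:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// EndpointSlices carry per-endpoint Ready conditions; zero ready
		// endpoints reproduces the blank "Endpoints:" field above.
		slices, err := client.DiscoveryV1().EndpointSlices("default").List(context.Background(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=hello-node-connect"})
		if err != nil {
			panic(err)
		}
		ready := 0
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				if ep.Conditions.Ready != nil && *ep.Conditions.Ready {
					ready++
				}
			}
		}
		fmt.Printf("ready endpoints: %d\n", ready)
	}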
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-594147
helpers_test.go:243: (dbg) docker inspect functional-594147:

-- stdout --
	[
	    {
	        "Id": "72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e",
	        "Created": "2025-09-08T11:26:42.090104421Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:26:42.164373769Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/hostname",
	        "HostsPath": "/var/lib/docker/containers/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/hosts",
	        "LogPath": "/var/lib/docker/containers/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e-json.log",
	        "Name": "/functional-594147",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-594147:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-594147",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e",
	                "LowerDir": "/var/lib/docker/overlay2/936856b46709a3ea01a6e3ff3f5435bf10583c7dee0d5a697cdd9704a3a33dd5-init/diff:/var/lib/docker/overlay2/12fba0b2ee9605b82319300b6c0948dcd651b92089cc7fe5af71d16143e72a6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/936856b46709a3ea01a6e3ff3f5435bf10583c7dee0d5a697cdd9704a3a33dd5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/936856b46709a3ea01a6e3ff3f5435bf10583c7dee0d5a697cdd9704a3a33dd5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/936856b46709a3ea01a6e3ff3f5435bf10583c7dee0d5a697cdd9704a3a33dd5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-594147",
	                "Source": "/var/lib/docker/volumes/functional-594147/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-594147",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-594147",
	                "name.minikube.sigs.k8s.io": "functional-594147",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f422ac82b83cbee64c06726a3edf891822f74bf261ff118a37878a288f038fb0",
	            "SandboxKey": "/var/run/docker/netns/f422ac82b83c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-594147": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:41:5e:7d:df:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05e34e8bcefd51795afb1ef04be4f3b9a240f6f7b13734055ce2d7c71c5c1d23",
	                    "EndpointID": "c10baaff1f1d07a451d255acdb1739119c9c7d850183e165244f0a25392728f5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-594147",
	                        "72c1b9678509"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-594147 -n functional-594147
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 logs -n 25: (1.770217587s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr                                                             │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh sudo cat /usr/share/ca-certificates/295113.pem                                                                                      │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh sudo cat /etc/ssl/certs/2951132.pem                                                                                                 │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh sudo cat /usr/share/ca-certificates/2951132.pem                                                                                     │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image ls                                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr                                                             │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh sudo cat /etc/test/nested/copy/295113/hosts                                                                                         │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh echo hello                                                                                                                          │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image ls                                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ ssh     │ functional-594147 ssh cat /etc/hostname                                                                                                                   │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ tunnel  │ functional-594147 tunnel --alsologtostderr                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │                     │
	│ tunnel  │ functional-594147 tunnel --alsologtostderr                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │                     │
	│ image   │ functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr                                                             │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ tunnel  │ functional-594147 tunnel --alsologtostderr                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │                     │
	│ image   │ functional-594147 image ls                                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image save kicbase/echo-server:functional-594147 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image rm kicbase/echo-server:functional-594147 --alsologtostderr                                                                        │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image ls                                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image ls                                                                                                                                │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ image   │ functional-594147 image save --daemon kicbase/echo-server:functional-594147 --alsologtostderr                                                             │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ addons  │ functional-594147 addons list                                                                                                                             │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	│ addons  │ functional-594147 addons list -o json                                                                                                                     │ functional-594147 │ jenkins │ v1.36.0 │ 08 Sep 25 11:29 UTC │ 08 Sep 25 11:29 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:28:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:28:34.067700  318322 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:28:34.067880  318322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:28:34.067885  318322 out.go:374] Setting ErrFile to fd 2...
	I0908 11:28:34.067888  318322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:28:34.068391  318322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:28:34.068860  318322 out.go:368] Setting JSON to false
	I0908 11:28:34.069919  318322 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4266,"bootTime":1757326648,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 11:28:34.069984  318322 start.go:140] virtualization:  
	I0908 11:28:34.073509  318322 out.go:179] * [functional-594147] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 11:28:34.077321  318322 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:28:34.077412  318322 notify.go:220] Checking for updates...
	I0908 11:28:34.083145  318322 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:28:34.086008  318322 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:28:34.088827  318322 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 11:28:34.091681  318322 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 11:28:34.094451  318322 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:28:34.097867  318322 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:28:34.097964  318322 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:28:34.129964  318322 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:28:34.130066  318322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:28:34.202207  318322 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 11:28:34.192199945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:28:34.202301  318322 docker.go:318] overlay module found
	I0908 11:28:34.205488  318322 out.go:179] * Using the docker driver based on existing profile
	I0908 11:28:34.208270  318322 start.go:304] selected driver: docker
	I0908 11:28:34.208281  318322 start.go:918] validating driver "docker" against &{Name:functional-594147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:28:34.208364  318322 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:28:34.208468  318322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:28:34.268271  318322 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 11:28:34.259138245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:28:34.268684  318322 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:28:34.268700  318322 cni.go:84] Creating CNI manager for ""
	I0908 11:28:34.268758  318322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:28:34.268796  318322 start.go:348] cluster config:
	{Name:functional-594147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:28:34.273999  318322 out.go:179] * Starting "functional-594147" primary control-plane node in "functional-594147" cluster
	I0908 11:28:34.276730  318322 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:28:34.283728  318322 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:28:34.286647  318322 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:28:34.286700  318322 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 11:28:34.286708  318322 cache.go:58] Caching tarball of preloaded images
	I0908 11:28:34.286731  318322 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:28:34.286800  318322 preload.go:172] Found /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 11:28:34.286809  318322 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:28:34.286928  318322 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/config.json ...
	I0908 11:28:34.311187  318322 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 11:28:34.311199  318322 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 11:28:34.311210  318322 cache.go:232] Successfully downloaded all kic artifacts
	I0908 11:28:34.311233  318322 start.go:360] acquireMachinesLock for functional-594147: {Name:mka3bdc944143552258e4eeb8cb7bfd16fb58965 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:28:34.311287  318322 start.go:364] duration metric: took 37.358µs to acquireMachinesLock for "functional-594147"
	I0908 11:28:34.311305  318322 start.go:96] Skipping create...Using existing machine configuration
	I0908 11:28:34.311310  318322 fix.go:54] fixHost starting: 
	I0908 11:28:34.311579  318322 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
	I0908 11:28:34.328900  318322 fix.go:112] recreateIfNeeded on functional-594147: state=Running err=<nil>
	W0908 11:28:34.328921  318322 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 11:28:34.332159  318322 out.go:252] * Updating the running docker "functional-594147" container ...
	I0908 11:28:34.332182  318322 machine.go:93] provisionDockerMachine start ...
	I0908 11:28:34.332270  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:34.357555  318322 main.go:141] libmachine: Using SSH client type: native
	I0908 11:28:34.357904  318322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0908 11:28:34.357911  318322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:28:34.481223  318322 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-594147
	
	I0908 11:28:34.481238  318322 ubuntu.go:182] provisioning hostname "functional-594147"
	I0908 11:28:34.481313  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:34.499800  318322 main.go:141] libmachine: Using SSH client type: native
	I0908 11:28:34.500110  318322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0908 11:28:34.500118  318322 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-594147 && echo "functional-594147" | sudo tee /etc/hostname
	I0908 11:28:34.641967  318322 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-594147
	
	I0908 11:28:34.642036  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:34.661866  318322 main.go:141] libmachine: Using SSH client type: native
	I0908 11:28:34.662190  318322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0908 11:28:34.662222  318322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-594147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-594147/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-594147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:28:34.786127  318322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
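The SSH snippet above is minikube's /etc/hosts reconciliation: do nothing if a line already maps the hostname, otherwise rewrite an existing 127.0.1.1 entry or append one. As a reading aid, here is a minimal Go sketch of the same logic (hypothetical helper name; not the code ssh_runner actually executes):

package sketch

import (
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell above: no-op when some line already
// ends with the hostname; otherwise rewrite an existing "127.0.1.1 ..."
// line, or append a new entry when none exists.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	hasHost := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	for _, l := range lines {
		if hasHost.MatchString(l) {
			return nil // hostname already mapped
		}
	}
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	for i := range lines {
		if loopback.MatchString(lines[i]) {
			lines[i] = "127.0.1.1 " + hostname // rewrite existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // append new entry
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}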
	I0908 11:28:34.786142  318322 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-293252/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-293252/.minikube}
	I0908 11:28:34.786162  318322 ubuntu.go:190] setting up certificates
	I0908 11:28:34.786171  318322 provision.go:84] configureAuth start
	I0908 11:28:34.786242  318322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-594147
	I0908 11:28:34.805074  318322 provision.go:143] copyHostCerts
	I0908 11:28:34.805133  318322 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-293252/.minikube/ca.pem, removing ...
	I0908 11:28:34.805156  318322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-293252/.minikube/ca.pem
	I0908 11:28:34.805234  318322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-293252/.minikube/ca.pem (1078 bytes)
	I0908 11:28:34.805345  318322 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-293252/.minikube/cert.pem, removing ...
	I0908 11:28:34.805348  318322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-293252/.minikube/cert.pem
	I0908 11:28:34.805382  318322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-293252/.minikube/cert.pem (1123 bytes)
	I0908 11:28:34.805484  318322 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-293252/.minikube/key.pem, removing ...
	I0908 11:28:34.805488  318322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-293252/.minikube/key.pem
	I0908 11:28:34.805510  318322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-293252/.minikube/key.pem (1675 bytes)
	I0908 11:28:34.805555  318322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-293252/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca-key.pem org=jenkins.functional-594147 san=[127.0.0.1 192.168.49.2 functional-594147 localhost minikube]
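The san=[...] list in the log line above becomes the server certificate's subject alternative names. For orientation, issuing such a certificate with Go's crypto/x509 looks roughly like the sketch below; the org and SANs are copied from the log line and the 26280h lifetime from the profile's CertExpiration, but an already-parsed RSA CA is assumed and none of this is minikube's actual implementation:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a leaf certificate carrying the DNS and IP SANs
// from the log line above with the supplied CA certificate and key.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-594147"}},
		DNSNames:     []string{"functional-594147", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // DER certificate plus its private key
}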
	I0908 11:28:35.231391  318322 provision.go:177] copyRemoteCerts
	I0908 11:28:35.231445  318322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:28:35.231487  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:35.249379  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:35.347167  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 11:28:35.375578  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 11:28:35.401942  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 11:28:35.426967  318322 provision.go:87] duration metric: took 640.773165ms to configureAuth
	I0908 11:28:35.426985  318322 ubuntu.go:206] setting minikube options for container-runtime
	I0908 11:28:35.427183  318322 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:28:35.427285  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:35.449249  318322 main.go:141] libmachine: Using SSH client type: native
	I0908 11:28:35.449551  318322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0908 11:28:35.449563  318322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:28:40.858053  318322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:28:40.858065  318322 machine.go:96] duration metric: took 6.52587563s to provisionDockerMachine
	I0908 11:28:40.858074  318322 start.go:293] postStartSetup for "functional-594147" (driver="docker")
	I0908 11:28:40.858085  318322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:28:40.858153  318322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:28:40.858190  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:40.876191  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:40.967311  318322 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:28:40.970615  318322 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 11:28:40.970638  318322 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 11:28:40.970647  318322 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 11:28:40.970653  318322 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 11:28:40.970662  318322 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-293252/.minikube/addons for local assets ...
	I0908 11:28:40.970718  318322 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-293252/.minikube/files for local assets ...
	I0908 11:28:40.970798  318322 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/ssl/certs/2951132.pem -> 2951132.pem in /etc/ssl/certs
	I0908 11:28:40.970874  318322 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/test/nested/copy/295113/hosts -> hosts in /etc/test/nested/copy/295113
	I0908 11:28:40.970917  318322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/295113
	I0908 11:28:40.980351  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/ssl/certs/2951132.pem --> /etc/ssl/certs/2951132.pem (1708 bytes)
	I0908 11:28:41.005629  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/test/nested/copy/295113/hosts --> /etc/test/nested/copy/295113/hosts (40 bytes)
	I0908 11:28:41.033866  318322 start.go:296] duration metric: took 175.77533ms for postStartSetup
	I0908 11:28:41.033954  318322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:28:41.034042  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:41.050860  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:41.138853  318322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 11:28:41.143689  318322 fix.go:56] duration metric: took 6.832371675s for fixHost
	I0908 11:28:41.143705  318322 start.go:83] releasing machines lock for "functional-594147", held for 6.832410789s
	I0908 11:28:41.143770  318322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-594147
	I0908 11:28:41.161040  318322 ssh_runner.go:195] Run: cat /version.json
	I0908 11:28:41.161077  318322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:28:41.161082  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:41.161136  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:41.188484  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:41.190202  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:41.282440  318322 ssh_runner.go:195] Run: systemctl --version
	I0908 11:28:41.428068  318322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:28:41.572051  318322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:28:41.576512  318322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:28:41.585610  318322 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 11:28:41.585678  318322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:28:41.594892  318322 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 11:28:41.594906  318322 start.go:495] detecting cgroup driver to use...
	I0908 11:28:41.594938  318322 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 11:28:41.594986  318322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:28:41.607814  318322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:28:41.619963  318322 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:28:41.620018  318322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:28:41.634018  318322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:28:41.645986  318322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:28:41.767680  318322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:28:41.887454  318322 docker.go:234] disabling docker service ...
	I0908 11:28:41.887515  318322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:28:41.901213  318322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:28:41.913672  318322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:28:42.031712  318322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:28:42.166902  318322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:28:42.183311  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:28:42.205407  318322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:28:42.205490  318322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:28:42.218480  318322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:28:42.218559  318322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:28:42.231008  318322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:28:42.242902  318322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:28:42.254595  318322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:28:42.265115  318322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:28:42.279546  318322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:28:42.293054  318322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
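The sed pipeline above is a plain config rewrite of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, and seed default_sysctls. A minimal Go sketch of the first two substitutions, assuming the file layout shown (the remaining edits follow the same pattern):

package sketch

import (
	"os"
	"regexp"
)

// patchCrioConf mirrors the first two sed calls above: point pause_image
// at the requested image and force the given cgroup manager.
func patchCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
	return os.WriteFile(path, out, 0644)
}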
	I0908 11:28:42.307817  318322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:28:42.317060  318322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:28:42.326125  318322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:28:42.440746  318322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:28:46.369258  318322 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.928489185s)
	I0908 11:28:46.369274  318322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:28:46.369325  318322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:28:46.373232  318322 start.go:563] Will wait 60s for crictl version
	I0908 11:28:46.373285  318322 ssh_runner.go:195] Run: which crictl
	I0908 11:28:46.376799  318322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:28:46.416839  318322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 11:28:46.416926  318322 ssh_runner.go:195] Run: crio --version
	I0908 11:28:46.456328  318322 ssh_runner.go:195] Run: crio --version
	I0908 11:28:46.504151  318322 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 11:28:46.507054  318322 cli_runner.go:164] Run: docker network inspect functional-594147 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
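The Go template passed to docker network inspect above renders the network as a single JSON object. One plausible way to consume that output is sketched below; the struct shape is inferred from the template itself, not taken from minikube's source. Note the {{range}} over .Containers leaves a trailing comma inside the ContainerIPs array, which strict JSON decoders reject:

package sketch

import (
	"bytes"
	"encoding/json"
)

// dockerNetwork matches the JSON shape the inspect template above emits.
type dockerNetwork struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func parseNetwork(raw []byte) (dockerNetwork, error) {
	// Normalize the template's trailing comma (",]") before decoding,
	// since encoding/json treats it as a syntax error.
	raw = bytes.ReplaceAll(raw, []byte(",]"), []byte("]"))
	var n dockerNetwork
	err := json.Unmarshal(raw, &n)
	return n, err
}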
	I0908 11:28:46.522335  318322 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 11:28:46.529024  318322 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 11:28:46.531825  318322 kubeadm.go:875] updating cluster {Name:functional-594147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:28:46.531946  318322 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:28:46.532017  318322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:28:46.583317  318322 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:28:46.583329  318322 crio.go:433] Images already preloaded, skipping extraction
	I0908 11:28:46.583384  318322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:28:46.621973  318322 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:28:46.621985  318322 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:28:46.621991  318322 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0908 11:28:46.622080  318322 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-594147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:28:46.622154  318322 ssh_runner.go:195] Run: crio config
	I0908 11:28:46.688232  318322 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 11:28:46.688252  318322 cni.go:84] Creating CNI manager for ""
	I0908 11:28:46.688261  318322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:28:46.688268  318322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:28:46.688295  318322 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-594147 NodeName:functional-594147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:28:46.688446  318322 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-594147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
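The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch, assuming gopkg.in/yaml.v3, that walks such a stream and prints each document's apiVersion and kind as a structural sanity check:

package sketch

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

// printKinds decodes a multi-document YAML stream like the kubeadm config
// above and reports the apiVersion/kind of every document.
func printKinds(config string) error {
	dec := yaml.NewDecoder(strings.NewReader(config))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // all documents read
			}
			return err
		}
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}

For the config above this would print kubeadm.k8s.io/v1beta4 InitConfiguration, kubeadm.k8s.io/v1beta4 ClusterConfiguration, kubelet.config.k8s.io/v1beta1 KubeletConfiguration, and kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.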
	
	I0908 11:28:46.688514  318322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:28:46.697844  318322 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:28:46.697904  318322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:28:46.706909  318322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0908 11:28:46.724783  318322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:28:46.743214  318322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0908 11:28:46.761605  318322 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 11:28:46.765638  318322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:28:46.884535  318322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:28:46.897218  318322 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147 for IP: 192.168.49.2
	I0908 11:28:46.897230  318322 certs.go:194] generating shared ca certs ...
	I0908 11:28:46.897255  318322 certs.go:226] acquiring lock for ca certs: {Name:mkec8a5dd4303f23225e4d611fe7863c5eaee420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:28:46.897388  318322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-293252/.minikube/ca.key
	I0908 11:28:46.897435  318322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.key
	I0908 11:28:46.897441  318322 certs.go:256] generating profile certs ...
	I0908 11:28:46.897533  318322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.key
	I0908 11:28:46.897577  318322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/apiserver.key.85191377
	I0908 11:28:46.897616  318322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/proxy-client.key
	I0908 11:28:46.897743  318322 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/295113.pem (1338 bytes)
	W0908 11:28:46.897862  318322 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-293252/.minikube/certs/295113_empty.pem, impossibly tiny 0 bytes
	I0908 11:28:46.897873  318322 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 11:28:46.897898  318322 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/ca.pem (1078 bytes)
	I0908 11:28:46.897919  318322 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:28:46.897942  318322 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/certs/key.pem (1675 bytes)
	I0908 11:28:46.897990  318322 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/ssl/certs/2951132.pem (1708 bytes)
	I0908 11:28:46.898634  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:28:46.927573  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 11:28:46.953886  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:28:46.979348  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 11:28:47.004566  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 11:28:47.032419  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 11:28:47.060555  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:28:47.085371  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 11:28:47.110600  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/certs/295113.pem --> /usr/share/ca-certificates/295113.pem (1338 bytes)
	I0908 11:28:47.135664  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/ssl/certs/2951132.pem --> /usr/share/ca-certificates/2951132.pem (1708 bytes)
	I0908 11:28:47.160921  318322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:28:47.186076  318322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:28:47.204658  318322 ssh_runner.go:195] Run: openssl version
	I0908 11:28:47.210574  318322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951132.pem && ln -fs /usr/share/ca-certificates/2951132.pem /etc/ssl/certs/2951132.pem"
	I0908 11:28:47.220552  318322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951132.pem
	I0908 11:28:47.224247  318322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:26 /usr/share/ca-certificates/2951132.pem
	I0908 11:28:47.224305  318322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951132.pem
	I0908 11:28:47.231321  318322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951132.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:28:47.240739  318322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:28:47.251192  318322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:28:47.255259  318322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:18 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:28:47.255321  318322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:28:47.266605  318322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:28:47.275957  318322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295113.pem && ln -fs /usr/share/ca-certificates/295113.pem /etc/ssl/certs/295113.pem"
	I0908 11:28:47.286040  318322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295113.pem
	I0908 11:28:47.289725  318322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:26 /usr/share/ca-certificates/295113.pem
	I0908 11:28:47.289813  318322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295113.pem
	I0908 11:28:47.297147  318322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295113.pem /etc/ssl/certs/51391683.0"
	I0908 11:28:47.306545  318322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:28:47.310459  318322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:28:47.317402  318322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:28:47.324630  318322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:28:47.331596  318322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:28:47.338627  318322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:28:47.345730  318322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
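Each openssl x509 -checkend 86400 run above asks one question: does the certificate expire within the next 24 hours (86400 seconds)? The equivalent check in Go, as a sketch with a hypothetical helper name:

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window -- the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

A call like expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour) returning false corresponds to -checkend exiting 0, i.e. the cert outlives the window.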
	I0908 11:28:47.352710  318322 kubeadm.go:392] StartCluster: {Name:functional-594147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:28:47.352788  318322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:28:47.352850  318322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:28:47.392995  318322 cri.go:89] found id: "7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9"
	I0908 11:28:47.393015  318322 cri.go:89] found id: "e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a"
	I0908 11:28:47.393019  318322 cri.go:89] found id: "0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5"
	I0908 11:28:47.393022  318322 cri.go:89] found id: "3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82"
	I0908 11:28:47.393024  318322 cri.go:89] found id: "8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5"
	I0908 11:28:47.393027  318322 cri.go:89] found id: "625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b"
	I0908 11:28:47.393029  318322 cri.go:89] found id: "5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec"
	I0908 11:28:47.393032  318322 cri.go:89] found id: "b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e"
	I0908 11:28:47.393034  318322 cri.go:89] found id: ""
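The found id: entries above are presumably the newline-split output of the preceding crictl invocation, one container ID per line; the final empty string comes from the trailing newline. A trivial sketch of that split (an assumption about the parsing, not minikube's exact code):

package sketch

import "strings"

// splitContainerIDs splits `crictl ps -a --quiet ...` output into IDs.
// Splitting without trimming keeps the empty final element the trailing
// newline produces, matching the last `found id: ""` entry above.
func splitContainerIDs(out string) []string {
	return strings.Split(out, "\n")
}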
	I0908 11:28:47.393091  318322 ssh_runner.go:195] Run: sudo runc list -f json
	I0908 11:28:47.416978  318322 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5/userdata","rootfs":"/var/lib/containers/storage/overlay/02909838deb56da6b609b3f8a93ea96a9ee41b23569d28ef8c0198066654dc21/merged","created":"2025-09-08T11:28:06.526290352Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.325113769Z","io.kubernetes.cri-o.Image":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-594147\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"76cf820028fa0b2bed7e2de018547e7f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-594147_76cf820028fa0b2bed7e2de018547e7f/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/02909838deb56da6b609b3f8a93ea96a9ee41b23569d28ef8c0198066654dc21/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-594147_kube-system_76cf820028fa0b2bed7e2de018547e7f_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5980b15ac44f8d369a7c42ca8d496c2f5db6813ff72c3617494db0d042cc1629/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5980b15ac44f8d369a7c42ca8d496c2f5db6813ff72c3617494db0d042cc1629","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-594147_kube-system_76cf820028fa0b2bed7e2de018547e7f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/76cf820028fa0b2bed7e2de018547e7f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/76cf820028fa0b2bed7e2de018547e7f/containers/etcd/df07445c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-594147","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"76cf820028fa0b2bed7e2de018547e7f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"76cf820028fa0b2bed7e2de018547e7f","kubernetes.io/config.seen":"2025-09-08T11:26:58.531747023Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82/userdata","rootfs":"/var/lib/containers/storage/overlay/8e651ad3237e1959358993ef5fdaf03ce9d21bcd1aaf765843927d0d9d8a2c64/merged","created":"2025-09-08T11:28:06.444648095Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.306619072Z","io.kubernetes.cri-o.Image":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-594147\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7b95d2e30a8d57508ad826356cffe14a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-594147_7b95d2e30a8d57508ad826356cffe14a/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e651ad3237e1959358993ef5fdaf03ce9d21bcd1aaf765843927d0d9d8a2c64/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-594147_kube-system_7b95d2e30a8d57508ad826356cffe14a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-594147_kube-system_7b95d2e30a8d57508ad826356cffe14a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7b95d2e30a8d57508ad826356cffe14a/containers/kube-controller-manager/d5200a99\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7b95d2e30a8d57508ad826356cffe14a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-594147","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7b95d2e30a8d57508ad826356cffe14a","kubernetes.io/config.hash":"7b95d2e30a8d57508ad826356cffe14a","kubernetes.io/config.seen":"2025-09-08T11:26:58.531754030Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec/userdata","rootfs":"/var/lib/containers/storage/overlay/ca6c7baa4915fbd091964dac315aea775e5e7e6a24070d8bf7496e90f9791db0/merged","created":"2025-09-08T11:28:06.285122946Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.217086936Z","io.kubernetes.cri-o.Image":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-594147\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"633311d3ff2cc611850959d13cb7c7e8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-594147_633311d3ff2cc611850959d13cb7c7e8/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ca6c7baa4915fbd091964dac315aea775e5e7e6a24070d8bf7496e90f9791db0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-594147_kube-system_633311d3ff2cc611850959d13cb7c7e8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0a7dfefeea9045e5f35b9902ccc2cde66b4891c442a79f9a0ac0f8dc3c51e82a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0a7dfefeea9045e5f35b9902ccc2cde66b4891c442a79f9a0ac0f8dc3c51e82a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-594147_kube-system_633311d3ff2cc611850959d13cb7c7e8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/633311d3ff2cc611850959d13cb7c7e8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/633311d3ff2cc611850959d13cb7c7e8/containers/kube-scheduler/f43442ab\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-594147","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"633311d3ff2cc6
11850959d13cb7c7e8","kubernetes.io/config.hash":"633311d3ff2cc611850959d13cb7c7e8","kubernetes.io/config.seen":"2025-09-08T11:26:58.531755072Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b/userdata","rootfs":"/var/lib/containers/storage/overlay/69476d108879d453e7bcadb61d41f024f93b97d3f8cfae6be2c3c36eb71f3871/merged","created":"2025-09-08T11:28:06.340525668Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.termination
MessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.256631936Z","io.kubernetes.cri-o.Image":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.cont
ainer.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-594147\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6ba38fde3610e8d17d1e3f6d7d6611f1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-594147_6ba38fde3610e8d17d1e3f6d7d6611f1/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/69476d108879d453e7bcadb61d41f024f93b97d3f8cfae6be2c3c36eb71f3871/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-594147_kube-system_6ba38fde3610e8d17d1e3f6d7d6611f1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a5d06eeedf3e350e6b68b30c596c42b93b06a3f0fc2f2e986801ea4019e97568/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a5d06eeedf3e350e6b68b30c596c42b93b06a3f0fc2f2e986801ea4019e97568","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-funct
ional-594147_kube-system_6ba38fde3610e8d17d1e3f6d7d6611f1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6ba38fde3610e8d17d1e3f6d7d6611f1/containers/kube-apiserver/f434f653\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6ba38fde3610e8d17d1e3f6d7d6611f1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",
\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-594147","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6ba38fde3610e8d17d1e3f6d7d6611f1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"6ba38fde3610e8d17d1e3f6d7d6611f1","kubernetes.io/config.seen":"2025-09-08T11:26:58.531752651Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9","pid":0,"status":"stopped","bundle":"/run/containers/storage/o
verlay-containers/7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9/userdata","rootfs":"/var/lib/containers/storage/overlay/ae31ddfdc8690abf044c85f0460dbbda3f0d46fc85d51e8678e94a0ab58e8130/merged","created":"2025-09-08T11:28:06.787938848Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e2e56a4","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e2e56a4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9","io.kubernetes.cri-o.ContainerType":"contain
er","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.385606221Z","io.kubernetes.cri-o.Image":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.0","io.kubernetes.cri-o.ImageRef":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-rbn7h\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f6232c8f-0fe1-4d78-a3c2-d0342474ea33\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rbn7h_f6232c8f-0fe1-4d78-a3c2-d0342474ea33/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ae31ddfdc8690abf044c85f0460dbbda3f0d46fc85d51e8678e94a0ab58e8130/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-rbn7h_kube-system_f6232c8f-0fe1-4d78-a3c2-d0342474ea
33_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6ab4a2aaf436bae83091b06ba0f78a71435bc67e375133bce220efbff8812be8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6ab4a2aaf436bae83091b06ba0f78a71435bc67e375133bce220efbff8812be8","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-rbn7h_kube-system_f6232c8f-0fe1-4d78-a3c2-d0342474ea33_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f6232c8f-0fe1-4d78-a3c2-d0342474ea33/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\
"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f6232c8f-0fe1-4d78-a3c2-d0342474ea33/containers/kube-proxy/0b4417ce\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/f6232c8f-0fe1-4d78-a3c2-d0342474ea33/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f6232c8f-0fe1-4d78-a3c2-d0342474ea33/volumes/kubernetes.io~projected/kube-api-access-g526b\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-rbn7h","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f6232c8f-0fe1-4d78-a3c2-d0342474ea33","kubernetes.io/config.seen":"2025-09-08T11:27:12.430859068Z","kubernetes.io/config.source":"api"},"owner":"root"},{"o
ciVersion":"1.0.2-dev","id":"8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5/userdata","rootfs":"/var/lib/containers/storage/overlay/a58d1f455b1248e2b283cb690fa434c8fc174a3d0a6753750353b1ee5c5f16a3/merged","created":"2025-09-08T11:28:06.556006299Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePer
iod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.277293884Z","io.kubernetes.cri-o.Image":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-k58dk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"da1e9838-fcc3-4604-9f50-0556868f34ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-k58dk_da1e9838-fcc3-4604-9f50-0556868f34ba/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a58d1f455b1
248e2b283cb690fa434c8fc174a3d0a6753750353b1ee5c5f16a3/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-k58dk_kube-system_da1e9838-fcc3-4604-9f50-0556868f34ba_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1115b1d7af1b25f740fa4001913e44a5b2fe9390f8dd405439f6475f03beb3c1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1115b1d7af1b25f740fa4001913e44a5b2fe9390f8dd405439f6475f03beb3c1","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-k58dk_kube-system_da1e9838-fcc3-4604-9f50-0556868f34ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/et
c/hosts\",\"host_path\":\"/var/lib/kubelet/pods/da1e9838-fcc3-4604-9f50-0556868f34ba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/da1e9838-fcc3-4604-9f50-0556868f34ba/containers/kindnet-cni/00fddd10\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/da1e9838-fcc3-4604-9f50-0556868f34ba/volumes/kubernetes.io~projected/kube-api-access-wc7s8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-k58dk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"da1e9838-fcc3-4604-9f50-0556868f34ba","kubernetes.io/config.seen":"2025-09-08T11:27:1
2.485431962Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e/userdata","rootfs":"/var/lib/containers/storage/overlay/8b7663af8fab6b28f414ab1e70fd7b8707d5a3d48769eccb86469123cd1a9db1/merged","created":"2025-09-08T11:28:06.258784631Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCoun
t":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74
242efa5c9ba26e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.19677126Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-67nz9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8435cecc-839a-46c1-a76a-706831c8627d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-67nz9_8435cecc-839a-46c1-a76a-706831c8627d/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8b7663af8fab6b28f414ab1e70fd7b8707d5a3d48769eccb86469123cd1a9db1/mer
ged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-67nz9_kube-system_8435cecc-839a-46c1-a76a-706831c8627d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4bea0990f70f44c0db7f96af2937efd86cfa0245d418782238c4f0cc47f7a88a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4bea0990f70f44c0db7f96af2937efd86cfa0245d418782238c4f0cc47f7a88a","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-67nz9_kube-system_8435cecc-839a-46c1-a76a-706831c8627d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/8435cecc-839a-46c1-a76a-706831c8627d/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8435cecc-839a-46c1-a76a-706831c8627d/etc-
hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8435cecc-839a-46c1-a76a-706831c8627d/containers/coredns/12a41159\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8435cecc-839a-46c1-a76a-706831c8627d/volumes/kubernetes.io~projected/kube-api-access-6qht2\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-67nz9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8435cecc-839a-46c1-a76a-706831c8627d","kubernetes.io/config.seen":"2025-09-08T11:27:53.335015666Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a","pid":0,"status":"stopped","bundle":"/run/containers/
storage/overlay-containers/e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a/userdata","rootfs":"/var/lib/containers/storage/overlay/3efb7b99633272a506b0851decddc0db0391a19d2999316f94272e10ac22d058/merged","created":"2025-09-08T11:28:07.113981597Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a","io.kubernetes.cri-o.Con
tainerType":"container","io.kubernetes.cri-o.Created":"2025-09-08T11:28:06.343905663Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"265f5f0f-6ea1-4e42-af86-f48e3f73fd9d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_265f5f0f-6ea1-4e42-af86-f48e3f73fd9d/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3efb7b99633272a506b0851decddc0db0391a19d2999316f94272e10ac22d058/merged","io.kubernetes.cri-o.Name":"k8s_storage-provi
sioner_storage-provisioner_kube-system_265f5f0f-6ea1-4e42-af86-f48e3f73fd9d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0a83f72c8ce43634898a64df24c04a2993284de501de23b3c1c6b857cb75cd91/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0a83f72c8ce43634898a64df24c04a2993284de501de23b3c1c6b857cb75cd91","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_265f5f0f-6ea1-4e42-af86-f48e3f73fd9d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/265f5f0f-6ea1-4e42-af86-f48e3f73fd9d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/
265f5f0f-6ea1-4e42-af86-f48e3f73fd9d/containers/storage-provisioner/404efced\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/265f5f0f-6ea1-4e42-af86-f48e3f73fd9d/volumes/kubernetes.io~projected/kube-api-access-7jz6x\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"265f5f0f-6ea1-4e42-af86-f48e3f73fd9d","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisione
r:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-09-08T11:27:53.329553051Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0908 11:28:47.417563  318322 cri.go:126] list returned 8 containers
	I0908 11:28:47.417572  318322 cri.go:129] container: {ID:0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5 Status:stopped}
	I0908 11:28:47.417584  318322 cri.go:135] skipping {0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5 stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417592  318322 cri.go:129] container: {ID:3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82 Status:stopped}
	I0908 11:28:47.417597  318322 cri.go:135] skipping {3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82 stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417601  318322 cri.go:129] container: {ID:5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec Status:stopped}
	I0908 11:28:47.417608  318322 cri.go:135] skipping {5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417612  318322 cri.go:129] container: {ID:625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b Status:stopped}
	I0908 11:28:47.417617  318322 cri.go:135] skipping {625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417621  318322 cri.go:129] container: {ID:7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9 Status:stopped}
	I0908 11:28:47.417624  318322 cri.go:135] skipping {7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9 stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417649  318322 cri.go:129] container: {ID:8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5 Status:stopped}
	I0908 11:28:47.417654  318322 cri.go:135] skipping {8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5 stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417658  318322 cri.go:129] container: {ID:b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e Status:stopped}
	I0908 11:28:47.417662  318322 cri.go:135] skipping {b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e stopped}: state = "stopped", want "paused"
	I0908 11:28:47.417666  318322 cri.go:129] container: {ID:e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a Status:stopped}
	I0908 11:28:47.417670  318322 cri.go:135] skipping {e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a stopped}: state = "stopped", want "paused"
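Note: the loop above filters the eight listed CRI containers for state "paused" (containers left behind by a prior minikube pause would match) and skips everything else; since all of them are "stopped" after the restart, none qualify. Assuming the functional-594147 node is still up, the same listing can be reproduced with:

  $ minikube -p functional-594147 ssh -- sudo crictl ps -a -o table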
	I0908 11:28:47.417735  318322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:28:47.427187  318322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 11:28:47.427196  318322 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 11:28:47.427254  318322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 11:28:47.438098  318322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:28:47.438700  318322 kubeconfig.go:125] found "functional-594147" server: "https://192.168.49.2:8441"
	I0908 11:28:47.439992  318322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 11:28:47.449298  318322 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-08 11:26:49.298790643 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-08 11:28:46.754103852 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
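The drift check renders a fresh kubeadm.yaml and runs a plain diff -u against the copy already on the node; the only delta here is the apiserver's enable-admission-plugins value, consistent with the functional test restarting the cluster with an --extra-config override. To inspect the drift by hand (the same command the log runs via ssh_runner):

  $ minikube -p functional-594147 ssh -- \
      sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new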
	I0908 11:28:47.449308  318322 kubeadm.go:1152] stopping kube-system containers ...
	I0908 11:28:47.449321  318322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 11:28:47.449376  318322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:28:47.487160  318322 cri.go:89] found id: "7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9"
	I0908 11:28:47.487171  318322 cri.go:89] found id: "e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a"
	I0908 11:28:47.487175  318322 cri.go:89] found id: "0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5"
	I0908 11:28:47.487178  318322 cri.go:89] found id: "3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82"
	I0908 11:28:47.487182  318322 cri.go:89] found id: "8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5"
	I0908 11:28:47.487187  318322 cri.go:89] found id: "625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b"
	I0908 11:28:47.487189  318322 cri.go:89] found id: "5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec"
	I0908 11:28:47.487191  318322 cri.go:89] found id: "b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e"
	I0908 11:28:47.487193  318322 cri.go:89] found id: ""
	I0908 11:28:47.487197  318322 cri.go:252] Stopping containers: [7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9 e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a 0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5 3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82 8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5 625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b 5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e]
	I0908 11:28:47.487251  318322 ssh_runner.go:195] Run: which crictl
	I0908 11:28:47.491112  318322 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9 e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a 0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5 3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82 8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5 625a0e0246385bcc28e0247d9cd996cfa9b524983f2e26f2f29031e7eebf810b 5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e
	I0908 11:28:47.570622  318322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
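Before reconfiguring, every kube-system container is stopped by ID and then the kubelet itself, so static pods are not respawned mid-restart. The equivalent manual sequence on the node, condensed from the commands above:

  # collect kube-system container IDs, stop them, then stop the kubelet
  $ IDS=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
  $ sudo crictl stop --timeout=10 $IDS
  $ sudo systemctl stop kubelet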
	I0908 11:28:47.685153  318322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:28:47.695571  318322 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Sep  8 11:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Sep  8 11:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep  8 11:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Sep  8 11:26 /etc/kubernetes/scheduler.conf
	
	I0908 11:28:47.695631  318322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0908 11:28:47.706375  318322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0908 11:28:47.715070  318322 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:28:47.715129  318322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:28:47.724032  318322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0908 11:28:47.733123  318322 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:28:47.733183  318322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:28:47.741742  318322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0908 11:28:47.750807  318322 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:28:47.750864  318322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
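Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; admin.conf matches and is kept, while kubelet.conf, controller-manager.conf, and scheduler.conf do not, so they are removed and regenerated by the kubeconfig phase below. A condensed sketch of the same check-and-remove loop:

  $ for f in kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/$f ||
        sudo rm -f /etc/kubernetes/$f
    done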
	I0908 11:28:47.759825  318322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:28:47.768995  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:28:47.819177  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:28:50.270888  318322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.451686272s)
	I0908 11:28:50.270905  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:28:50.464154  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:28:50.533596  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
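Rather than a full kubeadm init, the restart replays individual init phases against the updated config. Condensed from the five Run lines above:

  # $phase is intentionally unquoted so e.g. "certs all" splits into two arguments
  $ for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done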
	I0908 11:28:50.598553  318322 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:28:50.598627  318322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:28:51.098984  318322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:28:51.599093  318322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:28:51.624699  318322 api_server.go:72] duration metric: took 1.026161932s to wait for apiserver process to appear ...
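Readiness is gated first on the apiserver process existing at all; the half-second poll above boils down to:

  $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'

which exits non-zero until the restarted static pod's container is running (about one second here).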
	I0908 11:28:51.624713  318322 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:28:51.624733  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:54.999076  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:28:54.999094  318322 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:28:54.999106  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:55.036864  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 11:28:55.036885  318322 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 11:28:55.125045  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:55.144388  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:28:55.144413  318322 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
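A 500 with the [+]/[-] listing means the serving path and etcd are healthy but some post-start hooks are still pending; here rbac/bootstrap-roles, the scheduler's bootstrap priority classes, the priority-and-fairness config producer, and APIService registration/discovery, all of which normally clear within seconds of a restart. The same per-check view is available on demand with the verbose query parameter:

  $ curl -ks 'https://192.168.49.2:8441/healthz?verbose'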
	I0908 11:28:55.624850  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:55.635041  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:28:55.635058  318322 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:28:56.125328  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:56.160628  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 11:28:56.160652  318322 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 11:28:56.624859  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:56.632987  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0908 11:28:56.647106  318322 api_server.go:141] control plane version: v1.34.0
	I0908 11:28:56.647127  318322 api_server.go:131] duration metric: took 5.022403593s to wait for apiserver health ...
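Once every check passes, /healthz returns a bare 200 "ok"; the whole health wait took about five seconds. The sibling /readyz endpoint exposes the same per-check breakdown for readiness:

  $ curl -ks 'https://192.168.49.2:8441/readyz?verbose'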
	I0908 11:28:56.647135  318322 cni.go:84] Creating CNI manager for ""
	I0908 11:28:56.647140  318322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:28:56.651460  318322 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 11:28:56.654433  318322 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 11:28:56.658410  318322 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 11:28:56.658420  318322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 11:28:56.677270  318322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
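With the docker driver and the crio runtime, minikube recommends and deploys kindnet as the CNI: the manifest is written to the node (the "scp memory" line) and applied with the version-pinned kubectl, exactly as logged:

  $ sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml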
	I0908 11:28:57.122813  318322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:28:57.126050  318322 system_pods.go:59] 8 kube-system pods found
	I0908 11:28:57.126070  318322 system_pods.go:61] "coredns-66bc5c9577-67nz9" [8435cecc-839a-46c1-a76a-706831c8627d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:28:57.126078  318322 system_pods.go:61] "etcd-functional-594147" [f594ea6e-f0a6-40df-9f5b-970bbf7e9de7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:28:57.126083  318322 system_pods.go:61] "kindnet-k58dk" [da1e9838-fcc3-4604-9f50-0556868f34ba] Running
	I0908 11:28:57.126089  318322 system_pods.go:61] "kube-apiserver-functional-594147" [f6fb91cf-66ab-45a5-920d-dc84467759ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:28:57.126095  318322 system_pods.go:61] "kube-controller-manager-functional-594147" [30eeb9e5-5271-4f58-bab6-7f0b9b6bb54e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:28:57.126100  318322 system_pods.go:61] "kube-proxy-rbn7h" [f6232c8f-0fe1-4d78-a3c2-d0342474ea33] Running
	I0908 11:28:57.126105  318322 system_pods.go:61] "kube-scheduler-functional-594147" [5149a2bb-4078-4e2b-946a-8a3ff7f83d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:28:57.126110  318322 system_pods.go:61] "storage-provisioner" [265f5f0f-6ea1-4e42-af86-f48e3f73fd9d] Running
	I0908 11:28:57.126115  318322 system_pods.go:74] duration metric: took 3.29255ms to wait for pod list to return data ...
	I0908 11:28:57.126121  318322 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:28:57.128845  318322 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 11:28:57.128864  318322 node_conditions.go:123] node cpu capacity is 2
	I0908 11:28:57.128875  318322 node_conditions.go:105] duration metric: took 2.750037ms to run NodePressure ...
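The NodePressure check reads capacity straight off the node object (2 CPUs and ~203Gi ephemeral storage on this arm64 runner). A rough equivalent query, assuming the profile's kubeconfig is active:

  $ kubectl get node functional-594147 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}'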
	I0908 11:28:57.128891  318322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 11:28:57.397118  318322 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 11:28:57.400512  318322 kubeadm.go:735] kubelet initialised
	I0908 11:28:57.400524  318322 kubeadm.go:736] duration metric: took 3.388418ms waiting for restarted kubelet to initialise ...
	I0908 11:28:57.400539  318322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:28:57.408165  318322 ops.go:34] apiserver oom_adj: -16
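An oom_adj of -16 is the legacy /proc view of oom_score_adj -997, the value the kubelet assigns to Guaranteed-QoS (including static control-plane) pods so the kernel OOM killer targets them last; minikube just sanity-checks it. The modern interface can be read directly (the -997 expectation is an inference from the kubelet's QoS policy, not from this log):

  # expected to print -997 for the Guaranteed-QoS apiserver static pod (assumption)
  $ cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj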
	I0908 11:28:57.408176  318322 kubeadm.go:593] duration metric: took 9.980975191s to restartPrimaryControlPlane
	I0908 11:28:57.408183  318322 kubeadm.go:394] duration metric: took 10.055483489s to StartCluster
	I0908 11:28:57.408209  318322 settings.go:142] acquiring lock: {Name:mkbde80afcd769206bcbb25bd8990d83418a87bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:28:57.408279  318322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:28:57.408959  318322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/kubeconfig: {Name:mk390277a44357409639aba3926256bcd9fea3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:28:57.409233  318322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:28:57.409438  318322 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:28:57.409536  318322 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 11:28:57.409598  318322 addons.go:69] Setting storage-provisioner=true in profile "functional-594147"
	I0908 11:28:57.409612  318322 addons.go:238] Setting addon storage-provisioner=true in "functional-594147"
	W0908 11:28:57.409617  318322 addons.go:247] addon storage-provisioner should already be in state true
	I0908 11:28:57.409640  318322 host.go:66] Checking if "functional-594147" exists ...
	I0908 11:28:57.410232  318322 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
	I0908 11:28:57.410382  318322 addons.go:69] Setting default-storageclass=true in profile "functional-594147"
	I0908 11:28:57.410396  318322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-594147"
	I0908 11:28:57.410742  318322 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
	I0908 11:28:57.414318  318322 out.go:179] * Verifying Kubernetes components...
	I0908 11:28:57.419828  318322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:28:57.442489  318322 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:28:57.442686  318322 addons.go:238] Setting addon default-storageclass=true in "functional-594147"
	W0908 11:28:57.442695  318322 addons.go:247] addon default-storageclass should already be in state true
	I0908 11:28:57.442721  318322 host.go:66] Checking if "functional-594147" exists ...
	I0908 11:28:57.443146  318322 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
	I0908 11:28:57.445383  318322 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:28:57.445393  318322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:28:57.445449  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:57.473187  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:57.479627  318322 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:28:57.479640  318322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:28:57.479703  318322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
	I0908 11:28:57.508836  318322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
	I0908 11:28:57.622193  318322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:28:57.631458  318322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:28:57.645469  318322 node_ready.go:35] waiting up to 6m0s for node "functional-594147" to be "Ready" ...
	I0908 11:28:57.648498  318322 node_ready.go:49] node "functional-594147" is "Ready"
	I0908 11:28:57.648515  318322 node_ready.go:38] duration metric: took 3.026155ms for node "functional-594147" to be "Ready" ...
	I0908 11:28:57.648527  318322 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:28:57.648587  318322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:28:57.657824  318322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:28:58.422737  318322 api_server.go:72] duration metric: took 1.013478716s to wait for apiserver process to appear ...
	I0908 11:28:58.422748  318322 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:28:58.422765  318322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 11:28:58.433115  318322 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 11:28:58.435003  318322 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0908 11:28:58.436031  318322 api_server.go:141] control plane version: v1.34.0
	I0908 11:28:58.436044  318322 api_server.go:131] duration metric: took 13.291176ms to wait for apiserver health ...
	I0908 11:28:58.436052  318322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:28:58.436077  318322 addons.go:514] duration metric: took 1.026516453s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 11:28:58.439573  318322 system_pods.go:59] 8 kube-system pods found
	I0908 11:28:58.439592  318322 system_pods.go:61] "coredns-66bc5c9577-67nz9" [8435cecc-839a-46c1-a76a-706831c8627d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:28:58.439599  318322 system_pods.go:61] "etcd-functional-594147" [f594ea6e-f0a6-40df-9f5b-970bbf7e9de7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:28:58.439605  318322 system_pods.go:61] "kindnet-k58dk" [da1e9838-fcc3-4604-9f50-0556868f34ba] Running
	I0908 11:28:58.439611  318322 system_pods.go:61] "kube-apiserver-functional-594147" [f6fb91cf-66ab-45a5-920d-dc84467759ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:28:58.439616  318322 system_pods.go:61] "kube-controller-manager-functional-594147" [30eeb9e5-5271-4f58-bab6-7f0b9b6bb54e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:28:58.439621  318322 system_pods.go:61] "kube-proxy-rbn7h" [f6232c8f-0fe1-4d78-a3c2-d0342474ea33] Running
	I0908 11:28:58.439626  318322 system_pods.go:61] "kube-scheduler-functional-594147" [5149a2bb-4078-4e2b-946a-8a3ff7f83d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:28:58.439630  318322 system_pods.go:61] "storage-provisioner" [265f5f0f-6ea1-4e42-af86-f48e3f73fd9d] Running
	I0908 11:28:58.439635  318322 system_pods.go:74] duration metric: took 3.578424ms to wait for pod list to return data ...
	I0908 11:28:58.439642  318322 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:28:58.441920  318322 default_sa.go:45] found service account: "default"
	I0908 11:28:58.441934  318322 default_sa.go:55] duration metric: took 2.287098ms for default service account to be created ...
	I0908 11:28:58.441942  318322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:28:58.445328  318322 system_pods.go:86] 8 kube-system pods found
	I0908 11:28:58.445346  318322 system_pods.go:89] "coredns-66bc5c9577-67nz9" [8435cecc-839a-46c1-a76a-706831c8627d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:28:58.445354  318322 system_pods.go:89] "etcd-functional-594147" [f594ea6e-f0a6-40df-9f5b-970bbf7e9de7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 11:28:58.445363  318322 system_pods.go:89] "kindnet-k58dk" [da1e9838-fcc3-4604-9f50-0556868f34ba] Running
	I0908 11:28:58.445369  318322 system_pods.go:89] "kube-apiserver-functional-594147" [f6fb91cf-66ab-45a5-920d-dc84467759ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 11:28:58.445375  318322 system_pods.go:89] "kube-controller-manager-functional-594147" [30eeb9e5-5271-4f58-bab6-7f0b9b6bb54e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 11:28:58.445379  318322 system_pods.go:89] "kube-proxy-rbn7h" [f6232c8f-0fe1-4d78-a3c2-d0342474ea33] Running
	I0908 11:28:58.445384  318322 system_pods.go:89] "kube-scheduler-functional-594147" [5149a2bb-4078-4e2b-946a-8a3ff7f83d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 11:28:58.445387  318322 system_pods.go:89] "storage-provisioner" [265f5f0f-6ea1-4e42-af86-f48e3f73fd9d] Running
	I0908 11:28:58.445393  318322 system_pods.go:126] duration metric: took 3.44679ms to wait for k8s-apps to be running ...
	I0908 11:28:58.445399  318322 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:28:58.445457  318322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:28:58.458404  318322 system_svc.go:56] duration metric: took 12.987957ms WaitForService to wait for kubelet
	I0908 11:28:58.458423  318322 kubeadm.go:578] duration metric: took 1.049169309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:28:58.458440  318322 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:28:58.461126  318322 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 11:28:58.461153  318322 node_conditions.go:123] node cpu capacity is 2
	I0908 11:28:58.461162  318322 node_conditions.go:105] duration metric: took 2.718775ms to run NodePressure ...
	I0908 11:28:58.461174  318322 start.go:241] waiting for startup goroutines ...
	I0908 11:28:58.461180  318322 start.go:246] waiting for cluster config update ...
	I0908 11:28:58.461190  318322 start.go:255] writing updated cluster config ...
	I0908 11:28:58.461528  318322 ssh_runner.go:195] Run: rm -f paused
	I0908 11:28:58.465160  318322 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:28:58.468681  318322 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-67nz9" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 11:29:00.475988  318322 pod_ready.go:104] pod "coredns-66bc5c9577-67nz9" is not "Ready", error: <nil>
	W0908 11:29:02.974535  318322 pod_ready.go:104] pod "coredns-66bc5c9577-67nz9" is not "Ready", error: <nil>
	W0908 11:29:04.974910  318322 pod_ready.go:104] pod "coredns-66bc5c9577-67nz9" is not "Ready", error: <nil>
	I0908 11:29:05.474384  318322 pod_ready.go:94] pod "coredns-66bc5c9577-67nz9" is "Ready"
	I0908 11:29:05.474398  318322 pod_ready.go:86] duration metric: took 7.005705036s for pod "coredns-66bc5c9577-67nz9" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:05.477153  318322 pod_ready.go:83] waiting for pod "etcd-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:05.483665  318322 pod_ready.go:94] pod "etcd-functional-594147" is "Ready"
	I0908 11:29:05.483680  318322 pod_ready.go:86] duration metric: took 6.514258ms for pod "etcd-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:05.486099  318322 pod_ready.go:83] waiting for pod "kube-apiserver-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:07.491644  318322 pod_ready.go:94] pod "kube-apiserver-functional-594147" is "Ready"
	I0908 11:29:07.491658  318322 pod_ready.go:86] duration metric: took 2.005547057s for pod "kube-apiserver-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:07.494319  318322 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:07.499069  318322 pod_ready.go:94] pod "kube-controller-manager-functional-594147" is "Ready"
	I0908 11:29:07.499084  318322 pod_ready.go:86] duration metric: took 4.752582ms for pod "kube-controller-manager-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:07.501627  318322 pod_ready.go:83] waiting for pod "kube-proxy-rbn7h" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:07.872780  318322 pod_ready.go:94] pod "kube-proxy-rbn7h" is "Ready"
	I0908 11:29:07.872795  318322 pod_ready.go:86] duration metric: took 371.155277ms for pod "kube-proxy-rbn7h" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:08.073201  318322 pod_ready.go:83] waiting for pod "kube-scheduler-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:08.472326  318322 pod_ready.go:94] pod "kube-scheduler-functional-594147" is "Ready"
	I0908 11:29:08.472340  318322 pod_ready.go:86] duration metric: took 399.126065ms for pod "kube-scheduler-functional-594147" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:29:08.472351  318322 pod_ready.go:40] duration metric: took 10.007171013s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:29:08.528681  318322 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:29:08.531572  318322 out.go:179] * Done! kubectl is now configured to use "functional-594147" cluster and "default" namespace by default
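	
	The restart sequence above gates on two checks: the apiserver answering /healthz at https://192.168.49.2:8441 with 200 "ok", and each control-plane pod reaching "Ready". A minimal Go sketch of that healthz poll, assuming a self-signed serving cert (hence the skip-verify transport); the URL and the 6m budget are the values from the log, the rest is illustrative:
	
	// healthzpoll.go - hedged sketch of the api_server.go healthz wait above,
	// not minikube's actual implementation.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test cluster's apiserver cert is self-signed, so this local
			// diagnostic probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "Will wait 6m0s" budget above
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8441/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}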
	
	
	==> CRI-O <==
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.580633010Z" level=info msg="Stopping pod sandbox: a5d06eeedf3e350e6b68b30c596c42b93b06a3f0fc2f2e986801ea4019e97568" id=2f2fe23e-b34d-4feb-b0c7-be1eb9107c4c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.580681723Z" level=info msg="Stopped pod sandbox (already stopped): a5d06eeedf3e350e6b68b30c596c42b93b06a3f0fc2f2e986801ea4019e97568" id=2f2fe23e-b34d-4feb-b0c7-be1eb9107c4c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.581023565Z" level=info msg="Removing pod sandbox: a5d06eeedf3e350e6b68b30c596c42b93b06a3f0fc2f2e986801ea4019e97568" id=ba4a6bf4-f578-4961-a643-161741dfea3e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.591219382Z" level=info msg="Removed pod sandbox: a5d06eeedf3e350e6b68b30c596c42b93b06a3f0fc2f2e986801ea4019e97568" id=ba4a6bf4-f578-4961-a643-161741dfea3e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.591724578Z" level=info msg="Stopping pod sandbox: 400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566" id=b1487d24-cd0f-4873-bccc-1a79589e1204 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.591764570Z" level=info msg="Stopped pod sandbox (already stopped): 400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566" id=b1487d24-cd0f-4873-bccc-1a79589e1204 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.592336171Z" level=info msg="Removing pod sandbox: 400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566" id=ab5b87e8-eb5d-48f9-8106-f764e11d4bb1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:29:50 functional-594147 crio[4196]: time="2025-09-08 11:29:50.602492250Z" level=info msg="Removed pod sandbox: 400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566" id=ab5b87e8-eb5d-48f9-8106-f764e11d4bb1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.662734286Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-jcqzc/POD" id=2c727c4b-4895-4576-969c-2673aee9b7bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.662794823Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.684499062Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-jcqzc Namespace:default ID:7110c8b414761c3b3fb4ad77faafb22ccff0a5d797d3e60281a738ba0d491a9d UID:4b12811a-9e17-43ed-a00d-4723fd85041b NetNS:/var/run/netns/d1a90d6b-64cf-43f0-82f3-9e35b63c8cb5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.684539915Z" level=info msg="Adding pod default_hello-node-75c85bcc94-jcqzc to CNI network \"kindnet\" (type=ptp)"
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.693123953Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-jcqzc Namespace:default ID:7110c8b414761c3b3fb4ad77faafb22ccff0a5d797d3e60281a738ba0d491a9d UID:4b12811a-9e17-43ed-a00d-4723fd85041b NetNS:/var/run/netns/d1a90d6b-64cf-43f0-82f3-9e35b63c8cb5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.693275124Z" level=info msg="Checking pod default_hello-node-75c85bcc94-jcqzc for CNI network kindnet (type=ptp)"
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.696877644Z" level=info msg="Ran pod sandbox 7110c8b414761c3b3fb4ad77faafb22ccff0a5d797d3e60281a738ba0d491a9d with infra container: default/hello-node-75c85bcc94-jcqzc/POD" id=2c727c4b-4895-4576-969c-2673aee9b7bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 11:29:52 functional-594147 crio[4196]: time="2025-09-08 11:29:52.698283581Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=28e143bb-0366-4f3d-bd77-839012d21623 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:30:07 functional-594147 crio[4196]: time="2025-09-08 11:30:07.637000566Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d7261527-6f24-4aa5-b248-a8770e4da990 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:30:13 functional-594147 crio[4196]: time="2025-09-08 11:30:13.637165798Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bd919388-4ac2-454d-9252-1fa9f5bb72d2 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:30:29 functional-594147 crio[4196]: time="2025-09-08 11:30:29.636366016Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d07888dc-fb93-4c39-a73e-3323c8dcf4bb name=/runtime.v1.ImageService/PullImage
	Sep 08 11:30:56 functional-594147 crio[4196]: time="2025-09-08 11:30:56.637594543Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=97a34570-ae80-4144-9429-4e0cf6139eff name=/runtime.v1.ImageService/PullImage
	Sep 08 11:31:14 functional-594147 crio[4196]: time="2025-09-08 11:31:14.637070828Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=533b091c-91b1-4d54-817c-570d74b233f0 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:32:20 functional-594147 crio[4196]: time="2025-09-08 11:32:20.637371879Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=11841683-c6c2-491c-90f4-bf5aee7c1a1a name=/runtime.v1.ImageService/PullImage
	Sep 08 11:32:40 functional-594147 crio[4196]: time="2025-09-08 11:32:40.636881746Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bcdecaf9-093d-4103-9fed-bce6d6e333c3 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:35:03 functional-594147 crio[4196]: time="2025-09-08 11:35:03.636758898Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=99e749d0-f6ec-43e1-ba25-20eeb6664489 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:35:23 functional-594147 crio[4196]: time="2025-09-08 11:35:23.637095212Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f1bac235-8779-4dc7-a62a-93801bece81a name=/runtime.v1.ImageService/PullImage
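	
	Note the tail of this section: "Pulling image: kicbase/echo-server:latest" is re-issued for several minutes with no matching pull-complete or container-create line, which lines up with hello-node-75c85bcc94-jcqzc never appearing in the container list below and with the ServiceCmd failures in this report. A hedged sketch that retries the same pull by hand through crictl (assumes crictl and sudo are available on the node, which this log does not show):
	
	// pullcheck.go - re-run the pull CRI-O keeps retrying above, then list the
	// image; the profile name is the one from this run.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		for _, args := range [][]string{
			{"-p", "functional-594147", "ssh", "--", "sudo", "crictl", "pull", "kicbase/echo-server:latest"},
			{"-p", "functional-594147", "ssh", "--", "sudo", "crictl", "images", "kicbase/echo-server"},
		} {
			out, err := exec.Command("minikube", args...).CombinedOutput()
			fmt.Printf("$ minikube %v\n%s", args, out)
			if err != nil {
				fmt.Println("error:", err)
			}
		}
	}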
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d593d31641a1b       docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708   9 minutes ago       Running             myfrontend                0                   cda2fd1e1346b       sp-pod
	0303d590fdc2c       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   10 minutes ago      Running             nginx                     0                   a63df75a43296       nginx-svc
	36383c821be06       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   4bea0990f70f4       coredns-66bc5c9577-67nz9
	18533c45ba6f2       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                  10 minutes ago      Running             kube-proxy                2                   6ab4a2aaf436b       kube-proxy-rbn7h
	b0e7936859c84       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   0a83f72c8ce43       storage-provisioner
	e9d6be6058de6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   1115b1d7af1b2       kindnet-k58dk
	299bd2707b4c0       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                  10 minutes ago      Running             kube-apiserver            0                   0f04179c4937a       kube-apiserver-functional-594147
	43ee6dbd3157d       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                  10 minutes ago      Running             kube-controller-manager   2                   2e906b05a3f64       kube-controller-manager-functional-594147
	3e293e7e1ba03       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                  10 minutes ago      Running             kube-scheduler            2                   0a7dfefeea904       kube-scheduler-functional-594147
	206a490195ada       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   5980b15ac44f8       etcd-functional-594147
	7e1de6d080e18       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                  11 minutes ago      Exited              kube-proxy                1                   6ab4a2aaf436b       kube-proxy-rbn7h
	e0392ab8b6d82       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   0a83f72c8ce43       storage-provisioner
	0af89f785285b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   5980b15ac44f8       etcd-functional-594147
	3f830952b8e0d       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                  11 minutes ago      Exited              kube-controller-manager   1                   2e906b05a3f64       kube-controller-manager-functional-594147
	8b50fa1e8e0d8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   1115b1d7af1b2       kindnet-k58dk
	5fd50f10cabe0       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                  11 minutes ago      Exited              kube-scheduler            1                   0a7dfefeea904       kube-scheduler-functional-594147
	b6baa2e9d421d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   4bea0990f70f4       coredns-66bc5c9577-67nz9
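	
	The ATTEMPT column counts how many times the kubelet has created that container in its sandbox: the Exited attempt-1 rows are presumably the pre-restart containers, the Running attempt-2 rows their replacements, and kube-apiserver sits at attempt 0, most likely because it came up in a fresh sandbox (its old one is among those removed in the CRI-O section above). A sketch cross-checking against the restart counts the API reports (context name as used in this run):
	
	// restarts.go - compare the ATTEMPT column with each pod's restartCount.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-594147",
			"get", "pods", "-A", "-o",
			"custom-columns=NS:.metadata.namespace,POD:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("error:", err)
		}
	}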
	
	
	==> coredns [36383c821be06569c119bd97410abb41752bbcaeb66729f2ada08c6256999729] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52362 - 1663 "HINFO IN 4846866759372009309.7685507940895629716. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026878621s
	
	
	==> coredns [b6baa2e9d421de35e380188c46f5c9d0418056a3e3310d5e74242efa5c9ba26e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40768 - 11571 "HINFO IN 3188158541595959460.5701297558946287370. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019099079s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
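	
	The "forbidden" list errors above look transient: this attempt-1 coredns was most likely issued them while the restarted apiserver's RBAC cache was still filling, and it then shut down on SIGTERM once its attempt-2 replacement (first coredns block above) took over. A quick impersonation check that the coredns ServiceAccount does hold those list permissions once the cluster settles (context name as used in this run; "no" answers exit non-zero, which the sketch deliberately ignores):
	
	// rbaccheck.go - verify the three grants the log reported as forbidden.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		for _, res := range []string{"namespaces", "services", "endpointslices.discovery.k8s.io"} {
			out, _ := exec.Command("kubectl", "--context", "functional-594147",
				"auth", "can-i", "list", res,
				"--as", "system:serviceaccount:kube-system:coredns").CombinedOutput()
			fmt.Printf("list %s: %s", res, out)
		}
	}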
	
	
	==> describe nodes <==
	Name:               functional-594147
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-594147
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-594147
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_27_07_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:27:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-594147
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:39:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:34:41 +0000   Mon, 08 Sep 2025 11:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:34:41 +0000   Mon, 08 Sep 2025 11:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:34:41 +0000   Mon, 08 Sep 2025 11:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:34:41 +0000   Mon, 08 Sep 2025 11:27:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-594147
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 237de9b7bd2648c69c0b0e7f39e02005
	  System UUID:                17e31c17-b5cf-488c-bc8e-9d2ab77543d7
	  Boot ID:                    96333a60-ea75-4725-84ac-97579709a820
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jcqzc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  default                     hello-node-connect-7d85dfc575-jjn4t          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 coredns-66bc5c9577-67nz9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-594147                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-k58dk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-594147             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-594147    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rbn7h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-594147             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-594147 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-594147 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-594147 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-594147 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-594147 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-594147 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-594147 event: Registered Node functional-594147 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-594147 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-594147 event: Registered Node functional-594147 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-594147 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-594147 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-594147 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-594147 event: Registered Node functional-594147 in Controller
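	
	For reference, the percentages in "Allocated resources" are the summed pod requests/limits divided by the node's allocatable capacity (2000m CPU, 8022296Ki memory) and truncated to a whole percent, which is why 850m of CPU displays as 42%. A quick arithmetic check:
	
	// alloc.go - reproduce the table's percentages with integer division.
	package main
	
	import "fmt"
	
	func main() {
		fmt.Printf("cpu requests: %d%%\n", 850*100/2000)         // 42
		fmt.Printf("cpu limits:   %d%%\n", 100*100/2000)         // 5
		fmt.Printf("memory:       %d%%\n", 220*1024*100/8022296) // 220Mi over 8022296Ki -> 2
	}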
	
	
	==> dmesg <==
	[Sep 8 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013821] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503648] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033978] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.739980] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.579788] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 8 11:17] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [0af89f785285b5e49b161034a4f9d66bb2ceaf90e5d9ae982e81e652326b40d5] <==
	{"level":"warn","ts":"2025-09-08T11:28:09.421416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:09.439399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:09.463501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:09.491594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:09.514742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:09.530571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:09.583659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:28:35.593888Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:28:35.593937Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-594147","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T11:28:35.594027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:28:35.734966Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:28:35.735048Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:28:35.735069Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-08T11:28:35.735127Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:28:35.735187Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:28:35.735225Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:28:35.735235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:28:35.735253Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:28:35.735340Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:28:35.735382Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:28:35.735417Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:28:35.739089Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T11:28:35.739174Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:28:35.739251Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T11:28:35.739280Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-594147","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [206a490195ada71321146c9526cc6dda96dba06212548858afb978269a8aa36b] <==
	{"level":"warn","ts":"2025-09-08T11:28:53.556994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.579644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.603640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.618567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.631730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.659176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.673389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.698687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.712513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.735732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.758202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.772731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.804062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.814456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.835004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.846437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.868222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.926508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.958231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.972788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:53.995495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:28:54.068726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:38:52.739746Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1125}
	{"level":"info","ts":"2025-09-08T11:38:52.763214Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1125,"took":"23.130271ms","hash":3935147055,"current-db-size-bytes":3223552,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1384448,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-08T11:38:52.763273Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3935147055,"revision":1125,"compact-revision":-1}
	
	
	==> kernel <==
	 11:39:35 up  1:22,  0 users,  load average: 0.09, 0.32, 1.27
	Linux functional-594147 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8b50fa1e8e0d8b5cf723971b02055ff490cc6f7aedad372f51777423cde959a5] <==
	I0908 11:28:06.756532       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 11:28:06.770000       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 11:28:06.770161       1 main.go:148] setting mtu 1500 for CNI 
	I0908 11:28:06.770222       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 11:28:06.770245       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T11:28:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 11:28:07.035874       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 11:28:07.035902       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 11:28:07.035912       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 11:28:07.036423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 11:28:10.638208       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 11:28:10.638250       1 metrics.go:72] Registering metrics
	I0908 11:28:10.638322       1 controller.go:711] "Syncing nftables rules"
	I0908 11:28:17.035550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:28:17.035632       1 main.go:301] handling current node
	I0908 11:28:27.035626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:28:27.035661       1 main.go:301] handling current node
	
	
	==> kindnet [e9d6be6058de698e1e0bd5e5689a9a5c637c3065c35e0b5031a1d7a7d431cbb8] <==
	I0908 11:37:26.341489       1 main.go:301] handling current node
	I0908 11:37:36.337867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:36.337903       1 main.go:301] handling current node
	I0908 11:37:46.332878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:46.332914       1 main.go:301] handling current node
	I0908 11:37:56.335292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:56.335433       1 main.go:301] handling current node
	I0908 11:38:06.333590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:06.333622       1 main.go:301] handling current node
	I0908 11:38:16.333566       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:16.333602       1 main.go:301] handling current node
	I0908 11:38:26.341406       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:26.341443       1 main.go:301] handling current node
	I0908 11:38:36.337360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:36.337398       1 main.go:301] handling current node
	I0908 11:38:46.333851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:46.333891       1 main.go:301] handling current node
	I0908 11:38:56.332878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:56.332998       1 main.go:301] handling current node
	I0908 11:39:06.333199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:39:06.333242       1 main.go:301] handling current node
	I0908 11:39:16.332977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:39:16.333011       1 main.go:301] handling current node
	I0908 11:39:26.332921       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:39:26.332963       1 main.go:301] handling current node
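	
	This healthy kindnet instance settles into a fixed 10-second reconcile: each tick re-lists the cluster's nodes and re-syncs routes, logging "handling current node" for its own entry. A toy sketch of that cadence only (the real daemon lists nodes via an informer; this just shows the loop shape):
	
	// reconcile.go - illustrate the 10s tick seen in the timestamps above.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for i := 0; i < 3; i++ { // three ticks, then exit; the daemon loops forever
			<-ticker.C
			fmt.Println("Handling node with IPs: map[192.168.49.2:{}]")
		}
	}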
	
	
	==> kube-apiserver [299bd2707b4c0e9f6389e4f677a865d3a57516789f71ae74773aee66d7416219] <==
	I0908 11:28:58.583071       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 11:28:58.931062       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 11:28:58.981829       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 11:29:12.462078       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.37.94"}
	I0908 11:29:23.079219       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.8.149"}
	I0908 11:29:32.850256       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.196.37"}
	E0908 11:29:52.234815       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44402: use of closed network connection
	I0908 11:29:52.438256       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.114.115"}
	I0908 11:29:59.072376       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:30:09.641833       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:31:09.052308       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:31:12.806830       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:32:32.275023       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:32:41.353855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:33:41.826934       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:33:45.840770       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:35:00.024198       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:35:07.681476       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:16.316734       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:19.998531       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:37:24.557714       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:37:31.989475       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:47.356633       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:55.061919       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:38:58.457521       1 stats.go:136] "Error getting keys" err="empty key: \"\""
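	
	Worth noting: the apiserver allocated ClusterIPs for exactly the services behind the failing ServiceCmd tests (nginx-svc, hello-node-connect, hello-node), so the IPs themselves exist; but with the echo-server image pull stuck (CRI-O section above) the hello-node pods can never become ready and their endpoints would stay empty. A quick check sketch (context name as used in this run):
	
	// svccheck.go - list the endpoints behind each allocated ClusterIP.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		for _, svc := range []string{"hello-node", "hello-node-connect", "nginx-svc"} {
			out, _ := exec.Command("kubectl", "--context", "functional-594147",
				"get", "endpoints", svc, "-o", "wide").CombinedOutput()
			fmt.Print(string(out))
		}
	}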
	
	
	==> kube-controller-manager [3f830952b8e0daabc98355866ad4b4d5f9b39d682a6894a4392c507aa1bf2a82] <==
	I0908 11:28:13.884572       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 11:28:13.886864       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 11:28:13.893084       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:28:13.899370       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 11:28:13.902224       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:28:13.912617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:28:13.919904       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:28:13.920001       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:28:13.920207       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 11:28:13.920231       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 11:28:13.920253       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:28:13.921862       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:28:13.921944       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:28:13.922939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 11:28:13.927919       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:28:13.937913       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 11:28:13.937971       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 11:28:13.938007       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 11:28:13.938012       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 11:28:13.938017       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 11:28:13.938107       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 11:28:13.938192       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 11:28:13.938282       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-594147"
	I0908 11:28:13.938329       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 11:28:13.938375       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [43ee6dbd3157decc5c929e385127f462919d5c45894ba07170f3954cf7dc916e] <==
	I0908 11:28:58.574957       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 11:28:58.577913       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 11:28:58.578116       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:28:58.578164       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:28:58.580573       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:28:58.581764       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 11:28:58.583080       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 11:28:58.583156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:28:58.583179       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 11:28:58.583217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 11:28:58.583289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 11:28:58.584467       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:28:58.585968       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:28:58.587884       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:28:58.590191       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 11:28:58.593475       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 11:28:58.594698       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:28:58.599993       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:28:58.605683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:28:58.607861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:28:58.607886       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:28:58.607895       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:28:58.614484       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:28:58.622585       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 11:28:58.624785       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [18533c45ba6f2f8c425d77f8e604d73286f88d890d67c26a2d20d50c773e8de9] <==
	I0908 11:28:56.198112       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:28:56.297141       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:28:56.402816       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:28:56.402868       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:28:56.402945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:28:56.423069       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:28:56.423188       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:28:56.432258       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:28:56.439961       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:28:56.439993       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:28:56.441890       1 config.go:200] "Starting service config controller"
	I0908 11:28:56.441910       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:28:56.442229       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:28:56.442237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:28:56.442254       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:28:56.442259       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:28:56.442955       1 config.go:309] "Starting node config controller"
	I0908 11:28:56.442974       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:28:56.442984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:28:56.542950       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:28:56.542953       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:28:56.542977       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7e1de6d080e187db12adbb8fb32a87db77aeb16981890669063a999a3a3c1be9] <==
	I0908 11:28:09.200902       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:28:09.465931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:28:10.773498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:28:10.779635       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:28:10.779866       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:28:10.915476       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:28:10.915600       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:28:10.977323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:28:10.977715       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:28:10.977740       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:28:10.979468       1 config.go:200] "Starting service config controller"
	I0908 11:28:10.979489       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:28:10.979508       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:28:10.979520       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:28:10.979534       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:28:10.979538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:28:10.984943       1 config.go:309] "Starting node config controller"
	I0908 11:28:10.984969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:28:10.984977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:28:11.092642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:28:11.188372       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:28:11.188416       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3e293e7e1ba037b639ec509d177a2ef765123e9788f0dd39cc43233eec9feca9] <==
	I0908 11:28:53.867996       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:28:55.561800       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:28:55.561833       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:28:55.571984       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:28:55.572310       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 11:28:55.572387       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 11:28:55.572476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:28:55.575385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:55.575477       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:55.576874       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:28:55.605855       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:28:55.673668       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 11:28:55.675944       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:55.711837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5fd50f10cabe0853e3b30126f8f971d10e648428e9bdaafe49d28056ddd7f6ec] <==
	I0908 11:28:09.619516       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:28:11.682279       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:28:11.682314       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:28:11.687539       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:28:11.687621       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 11:28:11.687644       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 11:28:11.687677       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:28:11.690264       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:28:11.690290       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:28:11.690324       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:11.690330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:11.788303       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 11:28:11.790688       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:11.790702       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:28:35.602022       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:28:35.602418       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 11:28:35.602479       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 11:28:35.602526       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 11:28:35.602610       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:28:35.602706       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 11:28:35.602766       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.707899    4487 manager.go:1116] Failed to create existing container: /docker/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/crio-0a83f72c8ce43634898a64df24c04a2993284de501de23b3c1c6b857cb75cd91: Error finding container 0a83f72c8ce43634898a64df24c04a2993284de501de23b3c1c6b857cb75cd91: Status 404 returned error can't find the container with id 0a83f72c8ce43634898a64df24c04a2993284de501de23b3c1c6b857cb75cd91
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.708073    4487 manager.go:1116] Failed to create existing container: /crio-400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566: Error finding container 400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566: Status 404 returned error can't find the container with id 400310bfccfe47ba489d5ee5590aa1c25452110e2b80445a65012dc23524a566
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.708248    4487 manager.go:1116] Failed to create existing container: /crio-1115b1d7af1b25f740fa4001913e44a5b2fe9390f8dd405439f6475f03beb3c1: Error finding container 1115b1d7af1b25f740fa4001913e44a5b2fe9390f8dd405439f6475f03beb3c1: Status 404 returned error can't find the container with id 1115b1d7af1b25f740fa4001913e44a5b2fe9390f8dd405439f6475f03beb3c1
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.708431    4487 manager.go:1116] Failed to create existing container: /docker/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/crio-5980b15ac44f8d369a7c42ca8d496c2f5db6813ff72c3617494db0d042cc1629: Error finding container 5980b15ac44f8d369a7c42ca8d496c2f5db6813ff72c3617494db0d042cc1629: Status 404 returned error can't find the container with id 5980b15ac44f8d369a7c42ca8d496c2f5db6813ff72c3617494db0d042cc1629
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.708612    4487 manager.go:1116] Failed to create existing container: /docker/72c1b9678509d2fb61dbcb1b2042e0e3510514e5ff804b66fda8db4e9709f51e/crio-2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a: Error finding container 2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a: Status 404 returned error can't find the container with id 2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.708848    4487 manager.go:1116] Failed to create existing container: /crio-2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a: Error finding container 2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a: Status 404 returned error can't find the container with id 2e906b05a3f64efa1e99852738437395a69714e9413a27f8cdee8957d6fd6f7a
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.709069    4487 manager.go:1116] Failed to create existing container: /crio-4bea0990f70f44c0db7f96af2937efd86cfa0245d418782238c4f0cc47f7a88a: Error finding container 4bea0990f70f44c0db7f96af2937efd86cfa0245d418782238c4f0cc47f7a88a: Status 404 returned error can't find the container with id 4bea0990f70f44c0db7f96af2937efd86cfa0245d418782238c4f0cc47f7a88a
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.834147    4487 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331530833851681 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:38:50 functional-594147 kubelet[4487]: E0908 11:38:50.834183    4487 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331530833851681 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:38:51 functional-594147 kubelet[4487]: E0908 11:38:51.635950    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jjn4t" podUID="296a60c6-bfbb-4a5c-a218-b0b66133ebaa"
	Sep 08 11:38:51 functional-594147 kubelet[4487]: E0908 11:38:51.636000    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jcqzc" podUID="4b12811a-9e17-43ed-a00d-4723fd85041b"
	Sep 08 11:39:00 functional-594147 kubelet[4487]: E0908 11:39:00.835602    4487 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331540835298769 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:00 functional-594147 kubelet[4487]: E0908 11:39:00.835642    4487 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331540835298769 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:02 functional-594147 kubelet[4487]: E0908 11:39:02.636921    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jjn4t" podUID="296a60c6-bfbb-4a5c-a218-b0b66133ebaa"
	Sep 08 11:39:05 functional-594147 kubelet[4487]: E0908 11:39:05.636535    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jcqzc" podUID="4b12811a-9e17-43ed-a00d-4723fd85041b"
	Sep 08 11:39:10 functional-594147 kubelet[4487]: E0908 11:39:10.837591    4487 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331550837263049 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:10 functional-594147 kubelet[4487]: E0908 11:39:10.837627    4487 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331550837263049 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:13 functional-594147 kubelet[4487]: E0908 11:39:13.636483    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jjn4t" podUID="296a60c6-bfbb-4a5c-a218-b0b66133ebaa"
	Sep 08 11:39:19 functional-594147 kubelet[4487]: E0908 11:39:19.635776    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jcqzc" podUID="4b12811a-9e17-43ed-a00d-4723fd85041b"
	Sep 08 11:39:20 functional-594147 kubelet[4487]: E0908 11:39:20.839729    4487 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331560839462434 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:20 functional-594147 kubelet[4487]: E0908 11:39:20.839764    4487 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331560839462434 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:26 functional-594147 kubelet[4487]: E0908 11:39:26.636450    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-jjn4t" podUID="296a60c6-bfbb-4a5c-a218-b0b66133ebaa"
	Sep 08 11:39:30 functional-594147 kubelet[4487]: E0908 11:39:30.841063    4487 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331570840822120 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:30 functional-594147 kubelet[4487]: E0908 11:39:30.841103    4487 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331570840822120 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:226531} inodes_used:{value:94}}"
	Sep 08 11:39:34 functional-594147 kubelet[4487]: E0908 11:39:34.635835    4487 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jcqzc" podUID="4b12811a-9e17-43ed-a00d-4723fd85041b"
	
	
	==> storage-provisioner [b0e7936859c84558a78e6a85075bd8cee3b127303eb64610fb3e7a4ef0ad7d71] <==
	W0908 11:39:10.554553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:12.557661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:12.562498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:14.565219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:14.571827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:16.575237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:16.579725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:18.582622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:18.589327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:20.592333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:20.596749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:22.599520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:22.606160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:24.608947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:24.613742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:26.616746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:26.621067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:28.624084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:28.629273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:30.633191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:30.640245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:32.642866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:32.647653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:34.650923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:39:34.660424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e0392ab8b6d822769d54180561cabd0c50e36cda6f41b4e2056fb5fcfa9e252a] <==
	I0908 11:28:08.262056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:28:10.667705       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:28:10.667927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:28:10.671017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:14.126520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:18.387095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:21.985359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:25.039238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:28.066030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:28.072978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:28:28.073145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 11:28:28.073311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-594147_b79761db-6ddf-4999-b4c3-cd06d20e71ad!
	I0908 11:28:28.074040       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dda2afb2-dfe6-4c92-906e-5a17d9aa4e53", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-594147_b79761db-6ddf-4999-b4c3-cd06d20e71ad became leader
	W0908 11:28:28.079762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:28.084290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:28:28.173438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-594147_b79761db-6ddf-4999-b4c3-cd06d20e71ad!
	W0908 11:28:30.089612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:30.099576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:32.103131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:32.109744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:34.113389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:28:34.123682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-594147 -n functional-594147
helpers_test.go:269: (dbg) Run:  kubectl --context functional-594147 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-jcqzc hello-node-connect-7d85dfc575-jjn4t
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-594147 describe pod hello-node-75c85bcc94-jcqzc hello-node-connect-7d85dfc575-jjn4t
helpers_test.go:290: (dbg) kubectl --context functional-594147 describe pod hello-node-75c85bcc94-jcqzc hello-node-connect-7d85dfc575-jjn4t:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-jcqzc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-594147/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:29:52 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmw6w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fmw6w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jcqzc to functional-594147
	  Normal   Pulling    6m56s (x5 over 9m44s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 9m44s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m56s (x5 over 9m44s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m42s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m42s (x21 over 9m44s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-jjn4t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-594147/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:29:32 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ght5b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ght5b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jjn4t to functional-594147
	  Warning  Failed     7m16s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m16s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Normal   Pulling    4m33s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.84s)
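Root cause: CRI-O refuses unqualified ("short-name") image references when /etc/containers/registries.conf defines no unqualified-search registries, so every pull of "kicbase/echo-server" above ends in ErrImagePull. A minimal registries.conf sketch that would let the short name resolve (assuming Docker Hub is the intended registry; the alias entry is illustrative, not taken from this run):

	# /etc/containers/registries.conf (TOML)
	# Let unqualified image names fall back to Docker Hub...
	unqualified-search-registries = ["docker.io"]

	# ...or pin an explicit short-name alias for just this image.
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Equivalently, the test could reference the image fully qualified, which bypasses short-name resolution altogether.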

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (601s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-594147 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-594147 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jcqzc" [4b12811a-9e17-43ed-a00d-4723fd85041b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0908 11:31:13.603340  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:41.313899  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:36:13.602513  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-594147 -n functional-594147
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 11:39:52.917193028 +0000 UTC m=+1331.536734290
functional_test.go:1460: (dbg) Run:  kubectl --context functional-594147 describe po hello-node-75c85bcc94-jcqzc -n default
functional_test.go:1460: (dbg) kubectl --context functional-594147 describe po hello-node-75c85bcc94-jcqzc -n default:
Name:             hello-node-75c85bcc94-jcqzc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-594147/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:29:52 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmw6w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-fmw6w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jcqzc to functional-594147
  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-594147 logs hello-node-75c85bcc94-jcqzc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-594147 logs hello-node-75c85bcc94-jcqzc -n default: exit status 1 (149.643141ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-jcqzc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-594147 logs hello-node-75c85bcc94-jcqzc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.00s)
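The deployment itself was created successfully; only the image pull fails. A fully qualified reference avoids the short-name lookup entirely; a sketch of the equivalent commands (the docker.io path and the 1.0 tag are assumptions, not taken from this run):

	kubectl --context functional-594147 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-594147 expose deployment hello-node \
	  --type=NodePort --port=8080
	kubectl --context functional-594147 get pods -l app=hello-node -w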

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 service --namespace=default --https --url hello-node: exit status 115 (520.954769ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30837
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-594147 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 service hello-node --url --format={{.IP}}: exit status 115 (551.293685ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-594147 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 service hello-node --url: exit status 115 (558.117914ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30837
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-594147 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30837
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.56s)
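HTTPS, Format, and URL all exit with the same SVC_UNREACHABLE: `minikube service` resolves the NodePort (hence the URLs in stdout) but then finds no running pod backing hello-node, a downstream effect of the ImagePullBackOff in DeployApp above. A quick check that the service has no ready endpoints (illustrative, same context assumed):

	kubectl --context functional-594147 get endpoints hello-node
	kubectl --context functional-594147 get pods -l app=hello-node \
	  -o jsonpath='{.items[*].status.phase}'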

                                                
                                    

Test pass (294/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.81
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.28
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.17
18 TestDownloadOnly/v1.34.0/DeleteAll 0.36
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.23
21 TestBinaryMirror 0.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 197.16
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.96
35 TestAddons/parallel/Registry 21.32
36 TestAddons/parallel/RegistryCreds 0.72
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 6.8
41 TestAddons/parallel/CSI 58.92
42 TestAddons/parallel/Headlamp 28.14
43 TestAddons/parallel/CloudSpanner 6.62
44 TestAddons/parallel/LocalPath 52.11
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 12.21
48 TestAddons/StoppedEnableDisable 12.22
49 TestCertOptions 32.58
50 TestCertExpiration 252.15
52 TestForceSystemdFlag 45.9
53 TestForceSystemdEnv 36.59
59 TestErrorSpam/setup 34.67
60 TestErrorSpam/start 0.85
61 TestErrorSpam/status 1.16
62 TestErrorSpam/pause 1.79
63 TestErrorSpam/unpause 1.98
64 TestErrorSpam/stop 1.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.36
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.38
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.05
76 TestFunctional/serial/CacheCmd/cache/add_local 1.49
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 34.53
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.78
87 TestFunctional/serial/LogsFileCmd 1.78
88 TestFunctional/serial/InvalidService 4.59
90 TestFunctional/parallel/ConfigCmd 0.5
91 TestFunctional/parallel/DashboardCmd 8.58
92 TestFunctional/parallel/DryRun 0.45
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.03
99 TestFunctional/parallel/AddonsCmd 0.27
100 TestFunctional/parallel/PersistentVolumeClaim 25.16
102 TestFunctional/parallel/SSHCmd 0.63
103 TestFunctional/parallel/CpCmd 2.02
105 TestFunctional/parallel/FileSync 0.38
106 TestFunctional/parallel/CertSync 2.09
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
114 TestFunctional/parallel/License 0.43
115 TestFunctional/parallel/Version/short 0.41
116 TestFunctional/parallel/Version/components 1.69
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
122 TestFunctional/parallel/ImageCommands/Setup 0.67
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.64
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.58
135 TestFunctional/parallel/ImageCommands/ImageRemove 1.04
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
146 TestFunctional/parallel/ProfileCmd/profile_list 0.43
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
148 TestFunctional/parallel/MountCmd/any-port 9.05
149 TestFunctional/parallel/MountCmd/specific-port 1.68
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
151 TestFunctional/parallel/ServiceCmd/List 1.4
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.45
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.03
163 TestMultiControlPlane/serial/StartCluster 191.26
164 TestMultiControlPlane/serial/DeployApp 8.93
165 TestMultiControlPlane/serial/PingHostFromPods 1.71
166 TestMultiControlPlane/serial/AddWorkerNode 59.57
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
169 TestMultiControlPlane/serial/CopyFile 19.86
170 TestMultiControlPlane/serial/StopSecondaryNode 12.72
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 30.79
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.22
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 126.28
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.59
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
177 TestMultiControlPlane/serial/StopCluster 35.83
178 TestMultiControlPlane/serial/RestartCluster 86.11
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
180 TestMultiControlPlane/serial/AddSecondaryNode 81.11
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
185 TestJSONOutput/start/Command 80.59
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.78
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.68
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 40.39
211 TestKicCustomNetwork/use_default_bridge_network 32.58
212 TestKicExistingNetwork 32.32
213 TestKicCustomSubnet 38.18
214 TestKicStaticIP 38.14
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 64.42
219 TestMountStart/serial/StartWithMountFirst 6.6
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.24
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 7.38
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 135.17
231 TestMultiNode/serial/DeployApp2Nodes 6.83
232 TestMultiNode/serial/PingHostFrom2Pods 2.77
233 TestMultiNode/serial/AddNode 58.43
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.75
236 TestMultiNode/serial/CopyFile 10.21
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 7.64
239 TestMultiNode/serial/RestartKeepsNodes 75.72
240 TestMultiNode/serial/DeleteNode 5.55
241 TestMultiNode/serial/StopMultiNode 23.92
242 TestMultiNode/serial/RestartMultiNode 56.61
243 TestMultiNode/serial/ValidateNameConflict 35.08
248 TestPreload 125.42
250 TestScheduledStopUnix 112.09
253 TestInsufficientStorage 13.25
254 TestRunningBinaryUpgrade 64.7
256 TestKubernetesUpgrade 185.17
257 TestMissingContainerUpgrade 110.96
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 45.01
261 TestNoKubernetes/serial/StartWithStopK8s 114.46
262 TestNoKubernetes/serial/Start 10.49
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
264 TestNoKubernetes/serial/ProfileList 6.54
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 6.89
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 0.74
269 TestStoppedBinaryUpgrade/Upgrade 54.35
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
279 TestPause/serial/Start 95.67
287 TestNetworkPlugins/group/false 3.87
291 TestPause/serial/SecondStartNoReconfiguration 42.11
292 TestPause/serial/Pause 1.1
293 TestPause/serial/VerifyStatus 0.49
294 TestPause/serial/Unpause 1.08
295 TestPause/serial/PauseAgain 1.39
296 TestPause/serial/DeletePaused 3.16
297 TestPause/serial/VerifyDeletedResources 0.98
299 TestStartStop/group/old-k8s-version/serial/FirstStart 62.06
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
302 TestStartStop/group/old-k8s-version/serial/Stop 11.96
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/old-k8s-version/serial/SecondStart 55.52
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
308 TestStartStop/group/old-k8s-version/serial/Pause 3.24
310 TestStartStop/group/no-preload/serial/FirstStart 81.66
312 TestStartStop/group/embed-certs/serial/FirstStart 91.38
313 TestStartStop/group/no-preload/serial/DeployApp 10.49
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
315 TestStartStop/group/no-preload/serial/Stop 11.97
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 58.23
318 TestStartStop/group/embed-certs/serial/DeployApp 10.52
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
320 TestStartStop/group/embed-certs/serial/Stop 12.25
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
322 TestStartStop/group/embed-certs/serial/SecondStart 49.41
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/no-preload/serial/Pause 3.2
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.39
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.13
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
332 TestStartStop/group/embed-certs/serial/Pause 3.98
334 TestStartStop/group/newest-cni/serial/FirstStart 44.85
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/newest-cni/serial/SecondStart 18.21
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
344 TestStartStop/group/newest-cni/serial/Pause 3.25
345 TestNetworkPlugins/group/auto/Start 83.67
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.51
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 61.79
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
352 TestNetworkPlugins/group/auto/KubeletFlags 0.31
353 TestNetworkPlugins/group/auto/NetCatPod 11.28
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.13
356 TestNetworkPlugins/group/kindnet/Start 87.86
357 TestNetworkPlugins/group/auto/DNS 0.28
358 TestNetworkPlugins/group/auto/Localhost 0.23
359 TestNetworkPlugins/group/auto/HairPin 0.25
360 TestNetworkPlugins/group/calico/Start 66.64
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
364 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
365 TestNetworkPlugins/group/calico/KubeletFlags 0.29
366 TestNetworkPlugins/group/calico/NetCatPod 10.28
367 TestNetworkPlugins/group/kindnet/DNS 0.21
368 TestNetworkPlugins/group/kindnet/Localhost 0.18
369 TestNetworkPlugins/group/kindnet/HairPin 0.16
370 TestNetworkPlugins/group/calico/DNS 0.19
371 TestNetworkPlugins/group/calico/Localhost 0.18
372 TestNetworkPlugins/group/calico/HairPin 0.18
373 TestNetworkPlugins/group/custom-flannel/Start 68.65
374 TestNetworkPlugins/group/enable-default-cni/Start 87.83
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
377 TestNetworkPlugins/group/custom-flannel/DNS 0.19
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.38
382 TestNetworkPlugins/group/flannel/Start 70.03
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.49
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.32
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.26
386 TestNetworkPlugins/group/bridge/Start 75.93
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
389 TestNetworkPlugins/group/flannel/NetCatPod 11.28
390 TestNetworkPlugins/group/flannel/DNS 0.19
391 TestNetworkPlugins/group/flannel/Localhost 0.15
392 TestNetworkPlugins/group/flannel/HairPin 0.15
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
394 TestNetworkPlugins/group/bridge/NetCatPod 11.37
395 TestNetworkPlugins/group/bridge/DNS 0.17
396 TestNetworkPlugins/group/bridge/Localhost 0.15
397 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (5.81s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-468196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-468196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.80806191s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.81s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 11:17:47.232168  295113 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 11:17:47.232248  295113 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
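
The two preload.go lines above are the entire check: the subtest passes as long as the tarball fetched during the json-events run is still on disk. Below is a minimal Go sketch of an equivalent existence check; the cache layout and file name simply mirror the log output and are an assumption, not the test's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumed layout, taken from the log above: preload tarballs live under
	// <minikube home>/cache/preloaded-tarball/. Values are illustrative.
	home := "/home/jenkins/minikube-integration/21512-293252/.minikube"
	name := "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	path := filepath.Join(home, "cache", "preloaded-tarball", name)

	if _, err := os.Stat(path); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", path)
}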

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-468196
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-468196: exit status 85 (94.681166ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-468196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-468196 │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:17:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:17:41.478313  295118 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:17:41.478507  295118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:17:41.478540  295118 out.go:374] Setting ErrFile to fd 2...
	I0908 11:17:41.478583  295118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:17:41.478874  295118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	W0908 11:17:41.479063  295118 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21512-293252/.minikube/config/config.json: open /home/jenkins/minikube-integration/21512-293252/.minikube/config/config.json: no such file or directory
	I0908 11:17:41.479532  295118 out.go:368] Setting JSON to true
	I0908 11:17:41.480418  295118 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3614,"bootTime":1757326648,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 11:17:41.480519  295118 start.go:140] virtualization:  
	I0908 11:17:41.484730  295118 out.go:99] [download-only-468196] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 11:17:41.484929  295118 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 11:17:41.484964  295118 notify.go:220] Checking for updates...
	I0908 11:17:41.487882  295118 out.go:171] MINIKUBE_LOCATION=21512
	I0908 11:17:41.490903  295118 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:17:41.493858  295118 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:17:41.496675  295118 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 11:17:41.499697  295118 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 11:17:41.505473  295118 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 11:17:41.505735  295118 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:17:41.533293  295118 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:17:41.533394  295118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:17:41.601463  295118 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 11:17:41.591385899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:17:41.601577  295118 docker.go:318] overlay module found
	I0908 11:17:41.604646  295118 out.go:99] Using the docker driver based on user configuration
	I0908 11:17:41.604693  295118 start.go:304] selected driver: docker
	I0908 11:17:41.604705  295118 start.go:918] validating driver "docker" against <nil>
	I0908 11:17:41.604822  295118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:17:41.671875  295118 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 11:17:41.662552247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:17:41.672072  295118 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:17:41.672362  295118 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 11:17:41.672524  295118 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 11:17:41.675533  295118 out.go:171] Using Docker driver with root privileges
	I0908 11:17:41.678421  295118 cni.go:84] Creating CNI manager for ""
	I0908 11:17:41.678504  295118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:17:41.678519  295118 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:17:41.678620  295118 start.go:348] cluster config:
	{Name:download-only-468196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-468196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:17:41.681835  295118 out.go:99] Starting "download-only-468196" primary control-plane node in "download-only-468196" cluster
	I0908 11:17:41.681859  295118 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:17:41.684744  295118 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:17:41.684771  295118 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 11:17:41.684933  295118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:17:41.701330  295118 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:17:41.701522  295118 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 11:17:41.701627  295118 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:17:41.744196  295118 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0908 11:17:41.744238  295118 cache.go:58] Caching tarball of preloaded images
	I0908 11:17:41.744435  295118 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 11:17:41.747865  295118 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 11:17:41.747897  295118 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 11:17:41.833668  295118 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0908 11:17:44.949709  295118 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 11:17:44.949907  295118 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 11:17:45.917421  295118 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0908 11:17:45.918046  295118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/download-only-468196/config.json ...
	I0908 11:17:45.918105  295118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/download-only-468196/config.json: {Name:mkfa393e7b7d8c2c91a4cc6260ced639a1b474c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:17:45.918323  295118 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 11:17:45.918571  295118 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21512-293252/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-468196 host does not exist
	  To start a cluster, run: "minikube start -p download-only-468196"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
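
This subtest passes even though "minikube logs" exits non-zero: a download-only profile is never started, so there is no control-plane host to collect logs from, and the test asserts the failure rather than treating it as an error. Below is a minimal Go sketch of how a caller can distinguish such an expected non-zero exit from a failure to launch the binary at all; the command line mirrors the run above, and the reading of status 85 is taken from this log, not from minikube's source.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Expected to fail: the download-only profile was never started,
	// so there is no host to collect logs from.
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-468196")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("unexpected success:\n%s", out)
	case errors.As(err, &exitErr):
		// The binary ran but returned non-zero, e.g. exit status 85 above.
		fmt.Printf("exit status %d (expected)\n", exitErr.ExitCode())
	default:
		// The binary could not be started at all (missing, not executable, ...).
		fmt.Println("failed to run minikube:", err)
	}
}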

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-468196
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (5.28s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-487825 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-487825 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.280258758s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.28s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 11:17:52.983078  295113 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 11:17:52.983118  295113 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.17s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-487825
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-487825: exit status 85 (166.977975ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-468196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-468196 │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │ 08 Sep 25 11:17 UTC │
	│ delete  │ -p download-only-468196                                                                                                                                                   │ download-only-468196 │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │ 08 Sep 25 11:17 UTC │
	│ start   │ -o=json --download-only -p download-only-487825 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-487825 │ jenkins │ v1.36.0 │ 08 Sep 25 11:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:17:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:17:47.755029  295321 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:17:47.755207  295321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:17:47.755216  295321 out.go:374] Setting ErrFile to fd 2...
	I0908 11:17:47.755221  295321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:17:47.755489  295321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:17:47.755918  295321 out.go:368] Setting JSON to true
	I0908 11:17:47.756772  295321 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3620,"bootTime":1757326648,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 11:17:47.756843  295321 start.go:140] virtualization:  
	I0908 11:17:47.760311  295321 out.go:99] [download-only-487825] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 11:17:47.760562  295321 notify.go:220] Checking for updates...
	I0908 11:17:47.763489  295321 out.go:171] MINIKUBE_LOCATION=21512
	I0908 11:17:47.766484  295321 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:17:47.769446  295321 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:17:47.772910  295321 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 11:17:47.775923  295321 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 11:17:47.781769  295321 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 11:17:47.782079  295321 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:17:47.811848  295321 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:17:47.811966  295321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:17:47.874707  295321 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 11:17:47.864796339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:17:47.874830  295321 docker.go:318] overlay module found
	I0908 11:17:47.877906  295321 out.go:99] Using the docker driver based on user configuration
	I0908 11:17:47.877951  295321 start.go:304] selected driver: docker
	I0908 11:17:47.877963  295321 start.go:918] validating driver "docker" against <nil>
	I0908 11:17:47.878070  295321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:17:47.944471  295321 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 11:17:47.934159669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:17:47.944629  295321 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:17:47.944927  295321 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 11:17:47.945104  295321 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 11:17:47.948276  295321 out.go:171] Using Docker driver with root privileges
	I0908 11:17:47.951126  295321 cni.go:84] Creating CNI manager for ""
	I0908 11:17:47.951235  295321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:17:47.951250  295321 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:17:47.951351  295321 start.go:348] cluster config:
	{Name:download-only-487825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-487825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:17:47.954382  295321 out.go:99] Starting "download-only-487825" primary control-plane node in "download-only-487825" cluster
	I0908 11:17:47.954429  295321 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:17:47.957370  295321 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:17:47.957408  295321 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:17:47.957501  295321 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:17:47.973951  295321 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:17:47.974103  295321 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 11:17:47.974125  295321 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 11:17:47.974130  295321 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 11:17:47.974138  295321 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 11:17:48.014719  295321 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 11:17:48.014747  295321 cache.go:58] Caching tarball of preloaded images
	I0908 11:17:48.014933  295321 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:17:48.020894  295321 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 11:17:48.020936  295321 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 11:17:48.092923  295321 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 11:17:51.381504  295321 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 11:17:51.381620  295321 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21512-293252/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 11:17:52.326206  295321 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:17:52.326574  295321 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/download-only-487825/config.json ...
	I0908 11:17:52.326619  295321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/download-only-487825/config.json: {Name:mk5e878a3e556adf8254d9f3a5a5e91de77fcaeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:17:52.326810  295321 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:17:52.326964  295321 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21512-293252/.minikube/cache/linux/arm64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-487825 host does not exist
	  To start a cluster, run: "minikube start -p download-only-487825"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.17s)

TestDownloadOnly/v1.34.0/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.36s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-487825
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I0908 11:17:54.873126  295113 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-679593 --alsologtostderr --binary-mirror http://127.0.0.1:37151 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-679593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-679593
--- PASS: TestBinaryMirror (0.66s)
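
The binary.go line above shows the download URL carrying a "?checksum=file:...sha256" suffix, meaning the fetched kubectl is verified against a published SHA-256 digest rather than trusted blindly. Below is a minimal Go sketch of that kind of post-download verification, assuming the file is already on disk; the path and digest are placeholders, not values from this run.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verify compares a file's SHA-256 digest against an expected hex string,
// the same kind of check the checksum-suffixed URL above implies.
func verify(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder path and digest; substitute the published .sha256 value.
	if err := verify("kubectl", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("checksum OK")
}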

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-953262
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-953262: exit status 85 (81.612221ms)

-- stdout --
	* Profile "addons-953262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-953262"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-953262
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-953262: exit status 85 (66.979446ms)

-- stdout --
	* Profile "addons-953262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-953262"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (197.16s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-953262 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-953262 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m17.155423119s)
--- PASS: TestAddons/Setup (197.16s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-953262 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-953262 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.96s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-953262 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-953262 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dd7dea53-2bf2-4f9f-bde9-ad0b8e7636f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dd7dea53-2bf2-4f9f-bde9-ad0b8e7636f5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003453982s
addons_test.go:694: (dbg) Run:  kubectl --context addons-953262 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-953262 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-953262 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-953262 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.96s)

TestAddons/parallel/Registry (21.32s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 16.912814ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-r97cb" [6184ed08-56f5-465e-9229-22073ec7b0fe] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003723636s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9sg29" [7781d699-5d8e-48fc-98fe-5024b3b9bed5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003784412s
addons_test.go:392: (dbg) Run:  kubectl --context addons-953262 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-953262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-953262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.338938689s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 ip
2025/09/08 11:21:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.32s)
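
Both registry probes above are plain HTTP reachability checks: a wget --spider from a throwaway pod inside the cluster, then a GET against the node IP on port 5000. A small Go sketch of the host-side variant; probing /v2/ is an assumption (the log only shows a GET on the bare port), but it is the conventional registry API root and should answer 2xx when the registry is serving:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probeRegistry sends one GET to the registry and reports whether the
	// API answered with a 2xx status.
	func probeRegistry(base string) error {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get(base + "/v2/")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode/100 != 2 {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}
		return nil
	}

	func main() {
		// Node IP and port as logged above.
		fmt.Println(probeRegistry("http://192.168.49.2:5000"))
	}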

TestAddons/parallel/RegistryCreds (0.72s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.989749ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-953262
addons_test.go:332: (dbg) Run:  kubectl --context addons-953262 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qxmbl" [78b3684b-eed4-437d-8579-039f4a5ec4b1] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003433429s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (6.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.695237ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vwpjc" [7b15b119-1e30-40fd-b4d1-cfdd8e730836] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003471081s
addons_test.go:463: (dbg) Run:  kubectl --context addons-953262 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

TestAddons/parallel/CSI (58.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 11:22:20.861025  295113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 11:22:20.865227  295113 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 11:22:20.865260  295113 kapi.go:107] duration metric: took 7.655863ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.666448ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-953262 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-953262 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [928c3be9-9f36-429a-9c24-b749ff99f81e] Pending
helpers_test.go:352: "task-pv-pod" [928c3be9-9f36-429a-9c24-b749ff99f81e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [928c3be9-9f36-429a-9c24-b749ff99f81e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004426069s
addons_test.go:572: (dbg) Run:  kubectl --context addons-953262 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-953262 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-953262 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-953262 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-953262 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-953262 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-953262 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c37c3554-a0b5-461a-8762-50d649d0181b] Pending
helpers_test.go:352: "task-pv-pod-restore" [c37c3554-a0b5-461a-8762-50d649d0181b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c37c3554-a0b5-461a-8762-50d649d0181b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003994298s
addons_test.go:614: (dbg) Run:  kubectl --context addons-953262 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-953262 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-953262 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.844749121s)
--- PASS: TestAddons/parallel/CSI (58.92s)
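
Each repeated "get pvc ... jsonpath={.status.phase}" line above is one poll of the claim until it reports Bound; the same loop then repeats for the snapshot's readyToUse field. A compact Go sketch of that polling idiom (the two-second interval is an assumption; the log does not show the helper's actual backoff):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForPVCBound polls .status.phase of a PVC until it is Bound or
	// the timeout expires, mirroring the repeated kubectl calls above.
	func waitForPVCBound(kubectlContext, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectlContext,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if err == nil && string(out) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
	}

	func main() {
		fmt.Println(waitForPVCBound("addons-953262", "hpvc", 6*time.Minute))
	}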

TestAddons/parallel/Headlamp (28.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-953262 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-953262 --alsologtostderr -v=1: (1.026899385s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-zzqmp" [472976e6-115d-4a52-aff7-55acefeaa70c] Pending
helpers_test.go:352: "headlamp-6f46646d79-zzqmp" [472976e6-115d-4a52-aff7-55acefeaa70c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-zzqmp" [472976e6-115d-4a52-aff7-55acefeaa70c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.007609314s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 addons disable headlamp --alsologtostderr -v=1: (6.099429625s)
--- PASS: TestAddons/parallel/Headlamp (28.14s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-8k977" [250d7113-94e2-443a-bc49-60c1e9bfaa9e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002852966s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/LocalPath (52.11s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-953262 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-953262 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [631b2847-e793-48bc-b9f4-ae41e3df1573] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [631b2847-e793-48bc-b9f4-ae41e3df1573] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [631b2847-e793-48bc-b9f4-ae41e3df1573] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003762497s
addons_test.go:967: (dbg) Run:  kubectl --context addons-953262 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 ssh "cat /opt/local-path-provisioner/pvc-377fa41e-0785-4a6e-bef9-eaaf481c512a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-953262 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-953262 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.984040642s)
--- PASS: TestAddons/parallel/LocalPath (52.11s)
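
The read-back above works because local-path-provisioner lays volumes out on the node as <pv-name>_<namespace>_<pvc-name>, which is where the pvc-377fa41e-0785-4a6e-bef9-eaaf481c512a_default_test-pvc path comes from. A short Go sketch that resolves the PV name instead of hardcoding it (profile and PVC names from the log; the directory convention is the provisioner's, not shown in the log itself):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The PV backing the claim supplies the pvc-<uid> directory prefix.
		pv, err := exec.Command("kubectl", "--context", "addons-953262",
			"get", "pvc", "test-pvc", "-o", "jsonpath={.spec.volumeName}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc/file1",
			strings.TrimSpace(string(pv)))
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-953262",
			"ssh", "cat "+path).CombinedOutput()
		fmt.Println(strings.TrimSpace(string(out)), err)
	}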

TestAddons/parallel/NvidiaDevicePlugin (6.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qpzck" [5919a938-17d0-4a8f-bf0c-a3ad322bf9f6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003246476s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

TestAddons/parallel/Yakd (12.21s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-q5zjv" [e58746c8-ce12-49ef-bda0-afee6d4f6ee0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.028899034s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-953262 addons disable yakd --alsologtostderr -v=1: (6.174684884s)
--- PASS: TestAddons/parallel/Yakd (12.21s)

TestAddons/StoppedEnableDisable (12.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-953262
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-953262: (11.923119743s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-953262
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-953262
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-953262
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

TestCertOptions (32.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-489831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-489831 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.859978219s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-489831 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-489831 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-489831 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-489831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-489831
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-489831: (1.99220151s)
--- PASS: TestCertOptions (32.58s)
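
The openssl call above dumps the apiserver certificate so the test can confirm the extra --apiserver-ips and --apiserver-names values became subject alternative names. The same check is straightforward in Go's standard library; a sketch, assuming the cert has first been copied off the node to a local apiserver.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	// checkSANs parses a PEM certificate and verifies the custom IP and
	// DNS SANs passed on the minikube start command line above.
	func checkSANs(pemBytes []byte) error {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		ipOK, nameOK := false, false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP("192.168.15.15")) {
				ipOK = true
			}
		}
		for _, name := range cert.DNSNames {
			if name == "www.google.com" {
				nameOK = true
			}
		}
		if !ipOK || !nameOK {
			return fmt.Errorf("missing SAN: ip present=%v, name present=%v", ipOK, nameOK)
		}
		return nil
	}

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(checkSANs(pemBytes))
	}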

TestCertExpiration (252.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-613519 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0908 12:15:56.679487  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-613519 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.446845383s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-613519 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-613519 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.35181695s)
helpers_test.go:175: Cleaning up "cert-expiration-613519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-613519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-613519: (3.34960998s)
--- PASS: TestCertExpiration (252.15s)

TestForceSystemdFlag (45.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-754834 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0908 12:14:22.649766  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-754834 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.091295055s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-754834 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-754834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-754834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-754834: (2.479914676s)
--- PASS: TestForceSystemdFlag (45.90s)
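
The cat of /etc/crio/crio.conf.d/02-crio.conf above is how the test verifies that --force-systemd reached the container runtime. A sketch of that assertion; the cgroup_manager key in CRI-O's crio.runtime table is an assumption here, since the log shows the file being read but not its contents:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "force-systemd-flag-754834", "ssh",
			"cat /etc/crio/crio.conf.d/02-crio.conf").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		// Expect the drop-in to select the systemd cgroup manager.
		if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
			fmt.Println("systemd cgroup manager configured")
		} else {
			fmt.Println("systemd cgroup manager not found in drop-in")
		}
	}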

TestForceSystemdEnv (36.59s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-001420 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-001420 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.179939385s)
helpers_test.go:175: Cleaning up "force-systemd-env-001420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-001420
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-001420: (2.407425346s)
--- PASS: TestForceSystemdEnv (36.59s)

TestErrorSpam/setup (34.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-087522 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-087522 --driver=docker  --container-runtime=crio
E0908 11:26:13.611897  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:13.618912  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:13.630261  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:13.651621  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:13.693026  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:13.774487  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:13.936018  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:14.257715  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:14.899795  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:16.181099  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:18.743714  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:23.864999  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-087522 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-087522 --driver=docker  --container-runtime=crio: (34.666461748s)
--- PASS: TestErrorSpam/setup (34.67s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 stop: (1.301831827s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-087522 --log_dir /tmp/nospam-087522 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21512-293252/.minikube/files/etc/test/nested/copy/295113/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.36s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-594147 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0908 11:26:54.587757  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:27:35.550143  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-594147 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.356598815s)
--- PASS: TestFunctional/serial/StartWithProxy (79.36s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.38s)

=== RUN   TestFunctional/serial/SoftStart
I0908 11:27:56.038641  295113 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-594147 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-594147 --alsologtostderr -v=8: (29.379784128s)
functional_test.go:678: soft start took 29.384115584s for "functional-594147" cluster.
I0908 11:28:25.418757  295113 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (29.38s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-594147 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 cache add registry.k8s.io/pause:3.1: (1.320014371s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 cache add registry.k8s.io/pause:3.3: (1.377495338s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 cache add registry.k8s.io/pause:latest: (1.353084994s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-594147 /tmp/TestFunctionalserialCacheCmdcacheadd_local3492292794/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cache add minikube-local-cache-test:functional-594147
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cache delete minikube-local-cache-test:functional-594147
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-594147
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.003091ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 cache reload: (1.112327672s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)
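
The reload cycle above is: delete the cached image on the node, prove crictl no longer sees it (the FATA output), run cache reload, then prove it is back. A Go sketch of the same four steps, using the commands exactly as logged:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and reports whether it exited zero.
	func run(name string, args ...string) (bool, string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return err == nil, string(out)
	}

	func main() {
		const mk = "out/minikube-linux-arm64"
		const profile = "functional-594147"
		const img = "registry.k8s.io/pause:latest"

		run(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
		if ok, _ := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); ok {
			fmt.Println("image unexpectedly still present after rmi")
			return
		}
		run(mk, "-p", profile, "cache", "reload")
		ok, _ := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img)
		fmt.Println("image restored by cache reload:", ok)
	}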

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 kubectl -- --context functional-594147 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-594147 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (34.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-594147 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 11:28:57.472268  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-594147 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.527619931s)
functional_test.go:776: restart took 34.527723045s for "functional-594147" cluster.
I0908 11:29:08.549618  295113 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (34.53s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-594147 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
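
The health check above lists the control-plane pods as JSON and requires each to be in phase Running with a Ready condition of True. A minimal Go sketch decoding just the fields involved (the struct shapes follow the standard pod schema, trimmed for illustration):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList models only the fields the health check reads.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-594147",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println(err)
			return
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}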

TestFunctional/serial/LogsCmd (1.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 logs: (1.780776528s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 logs --file /tmp/TestFunctionalserialLogsFileCmd1178156523/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 logs --file /tmp/TestFunctionalserialLogsFileCmd1178156523/001/logs.txt: (1.783577262s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

TestFunctional/serial/InvalidService (4.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-594147 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-594147
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-594147: exit status 115 (400.752901ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31886 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-594147 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.59s)
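
The SVC_UNREACHABLE exit above is the expected outcome: the service exists and gets a NodePort URL, but no running pod backs it, so its endpoints list is empty. A small Go sketch of checking for ready endpoints before calling minikube service (standard kubectl jsonpath; context and service names from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasReadyEndpoints reports whether any pod IP is registered behind
	// the service; an empty list is what minikube surfaces as unreachable.
	func hasReadyEndpoints(kubectlContext, svc string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "endpoints", svc,
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		ok, err := hasReadyEndpoints("functional-594147", "invalid-svc")
		fmt.Println("ready endpoints:", ok, "err:", err)
	}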

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 config get cpus: exit status 14 (83.28836ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 config get cpus: exit status 14 (86.676527ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (8.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-594147 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-594147 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 325768: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.58s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (198.742876ms)

-- stdout --
	* [functional-594147] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0908 11:39:51.100060  325425 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:39:51.100224  325425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:39:51.100248  325425 out.go:374] Setting ErrFile to fd 2...
	I0908 11:39:51.100260  325425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:39:51.100557  325425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:39:51.100990  325425 out.go:368] Setting JSON to false
	I0908 11:39:51.102064  325425 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4943,"bootTime":1757326648,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 11:39:51.102144  325425 start.go:140] virtualization:  
	I0908 11:39:51.105405  325425 out.go:179] * [functional-594147] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 11:39:51.108377  325425 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:39:51.108420  325425 notify.go:220] Checking for updates...
	I0908 11:39:51.116219  325425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:39:51.119078  325425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:39:51.122046  325425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 11:39:51.124959  325425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 11:39:51.127760  325425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:39:51.131080  325425 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:39:51.131699  325425 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:39:51.166599  325425 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:39:51.166737  325425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:39:51.226346  325425 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 11:39:51.216878031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:39:51.226455  325425 docker.go:318] overlay module found
	I0908 11:39:51.229589  325425 out.go:179] * Using the docker driver based on existing profile
	I0908 11:39:51.232688  325425 start.go:304] selected driver: docker
	I0908 11:39:51.232713  325425 start.go:918] validating driver "docker" against &{Name:functional-594147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:39:51.232846  325425 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:39:51.236412  325425 out.go:203] 
	W0908 11:39:51.239458  325425 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 11:39:51.242442  325425 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-594147 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
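
Note: the exit status 23 above is the expected result of this test. minikube's dry-run validation rejects any requested memory below its 1800MB floor before creating resources, as the RSRC_INSUFFICIENT_REQ_MEMORY message shows. A minimal sketch of the same check from a shell (the 2048MB value is an arbitrary passing example, not taken from the test):

    out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exits 23
    out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 2048MB --driver=docker --container-runtime=crio  # passes validation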

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.047521ms)

-- stdout --
	* [functional-594147] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0908 11:39:50.889384  325380 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:39:50.889579  325380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:39:50.889610  325380 out.go:374] Setting ErrFile to fd 2...
	I0908 11:39:50.889634  325380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:39:50.891500  325380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:39:50.891957  325380 out.go:368] Setting JSON to false
	I0908 11:39:50.892978  325380 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4943,"bootTime":1757326648,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 11:39:50.893089  325380 start.go:140] virtualization:  
	I0908 11:39:50.896800  325380 out.go:179] * [functional-594147] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 11:39:50.900692  325380 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:39:50.900776  325380 notify.go:220] Checking for updates...
	I0908 11:39:50.906523  325380 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:39:50.909407  325380 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 11:39:50.912443  325380 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 11:39:50.915669  325380 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 11:39:50.918827  325380 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:39:50.922437  325380 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:39:50.923002  325380 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:39:50.954453  325380 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:39:50.954580  325380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:39:51.027346  325380 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 11:39:51.017160961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:39:51.027528  325380 docker.go:318] overlay module found
	I0908 11:39:51.030885  325380 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 11:39:51.033731  325380 start.go:304] selected driver: docker
	I0908 11:39:51.033751  325380 start.go:918] validating driver "docker" against &{Name:functional-594147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-594147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:39:51.033943  325380 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:39:51.037411  325380 out.go:203] 
	W0908 11:39:51.040400  325380 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 11:39:51.043193  325380 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
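
Note: this run repeats the DryRun scenario with localized output; the French lines translate to the same RSRC_INSUFFICIENT_REQ_MEMORY message ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB"). minikube selects the message language from the standard locale environment variables; a hedged repro sketch (LC_ALL=fr is inferred from the output, not read from the test source):

    LC_ALL=fr out/minikube-linux-arm64 start -p functional-594147 --dry-run --memory 250MB --driver=docker --container-runtime=crio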

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
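
Note: the -f flag above formats status through a Go template over the status struct (the "kublet" key is a typo in the test's own format string, left verbatim in the log), and -o json emits the same fields machine-readably. Interactive equivalents (jq is an assumption; any JSON consumer works):

    out/minikube-linux-arm64 -p functional-594147 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-arm64 -p functional-594147 status -o json | jq -r .Host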

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (25.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [265f5f0f-6ea1-4e42-af86-f48e3f73fd9d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003607647s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-594147 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-594147 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-594147 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-594147 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [426c1f79-4e8f-4e81-a808-176b8a5f3faf] Pending
helpers_test.go:352: "sp-pod" [426c1f79-4e8f-4e81-a808-176b8a5f3faf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [426c1f79-4e8f-4e81-a808-176b8a5f3faf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002933747s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-594147 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-594147 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-594147 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [38961c8e-d48d-47d2-9e49-cc5086870c63] Pending
helpers_test.go:352: "sp-pod" [38961c8e-d48d-47d2-9e49-cc5086870c63] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003725473s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-594147 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.16s)
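
Note: the sequence above is the persistence check itself: write a file into the claim-backed mount, delete the pod, schedule a fresh pod against the same claim, and confirm the file survived. The testdata manifests are not included in this report; a claim of roughly the shape the test applies might look like this (illustrative sketch only, size and access mode assumed):

    kubectl --context functional-594147 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF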

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh -n functional-594147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cp functional-594147:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd496239909/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh -n functional-594147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh -n functional-594147 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
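
Note: the three copies above cover host-to-node, node-to-host, and a target under /tmp/does/not/exist; the follow-up cat succeeding implies minikube cp creates missing destination directories. The node-side form is <profile>:<path>, e.g.:

    out/minikube-linux-arm64 -p functional-594147 cp functional-594147:/home/docker/cp-test.txt ./cp-test.txt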

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/295113/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /etc/test/nested/copy/295113/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)
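
Note: file sync mirrors the tree under $MINIKUBE_HOME/files into the node at the same absolute path, so the /etc/test/nested/copy/295113/hosts checked above corresponds to a file staged under .minikube/files on the host (a hedged description of the mechanism, not taken from this log). To re-check by hand:

    out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /etc/test/nested/copy/295113/hosts"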

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/295113.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /etc/ssl/certs/295113.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/295113.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /usr/share/ca-certificates/295113.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2951132.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /etc/ssl/certs/2951132.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2951132.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /usr/share/ca-certificates/2951132.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
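
Note: each certificate is verified in three places: the .pem under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and a hashed name (51391683.0, 3ec20f2e.0) that appears to follow OpenSSL's subject-hash convention. Assuming the PEM is at hand, the hash can be reproduced with:

    openssl x509 -in 295113.pem -noout -subject_hash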

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-594147 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
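
Note: the go-template above flattens the label map of the first node into one line; the stock kubectl flag surfaces the same information directly:

    kubectl --context functional-594147 get nodes --show-labels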

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh "sudo systemctl is-active docker": exit status 1 (406.30326ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh "sudo systemctl is-active containerd": exit status 1 (326.310984ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
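
Note: this is the expected shape for a crio cluster: systemctl is-active prints "inactive" and exits with status 3 for a stopped unit (0 means active), which is the "Process exited with status 3" in stderr; minikube ssh then surfaces the non-zero remote status as exit status 1. The active runtime can be confirmed the same way (assuming the unit is named crio):

    out/minikube-linux-arm64 -p functional-594147 ssh "sudo systemctl is-active crio"   # expected: active, exit 0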

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/Version/short (0.41s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 version --short
--- PASS: TestFunctional/parallel/Version/short (0.41s)

TestFunctional/parallel/Version/components (1.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 version -o=json --components: (1.68910944s)
--- PASS: TestFunctional/parallel/Version/components (1.69s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-594147 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-594147
localhost/kicbase/echo-server:functional-594147
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-594147 image ls --format short --alsologtostderr:
I0908 11:40:02.737388  326913 out.go:360] Setting OutFile to fd 1 ...
I0908 11:40:02.737590  326913 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:02.737616  326913 out.go:374] Setting ErrFile to fd 2...
I0908 11:40:02.737636  326913 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:02.737989  326913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
I0908 11:40:02.738670  326913 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:02.738864  326913 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:02.739383  326913 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
I0908 11:40:02.766922  326913 ssh_runner.go:195] Run: systemctl --version
I0908 11:40:02.766974  326913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
I0908 11:40:02.786509  326913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
I0908 11:40:02.878880  326913 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-594147 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 47ef8710c9f5a │ 202MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-594147  │ 5c58ae1cfb884 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ localhost/kicbase/echo-server           │ functional-594147  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-594147 image ls --format table --alsologtostderr:
I0908 11:40:03.400255  327083 out.go:360] Setting OutFile to fd 1 ...
I0908 11:40:03.400486  327083 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:03.400500  327083 out.go:374] Setting ErrFile to fd 2...
I0908 11:40:03.400506  327083 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:03.400795  327083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
I0908 11:40:03.401444  327083 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:03.401576  327083 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:03.402136  327083 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
I0908 11:40:03.426608  327083 ssh_runner.go:195] Run: systemctl --version
I0908 11:40:03.426672  327083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
I0908 11:40:03.447338  327083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
I0908 11:40:03.540504  327083 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-594147 image ls --format json --alsologtostderr:
[{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758","repoDigests":["docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708","docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-594147"],"size":"4788229"},{"id":"5c58ae1cfb88429cb0834a17b13a01c643c7339ac17aa5b55d9c21d97c125dda","repoDigests":["localhost/minikube-local-cache-test@sha256:5dcc8cb7489faacd80d963e7ba8f87cb2d4b5850787a1f1c03a3f24be4c0eec6"],"repoTags":["localhost/minikube-local-cache-test:functional-594147"],"size":"3330"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-594147 image ls --format json --alsologtostderr:
I0908 11:40:03.079682  326989 out.go:360] Setting OutFile to fd 1 ...
I0908 11:40:03.079881  326989 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:03.079907  326989 out.go:374] Setting ErrFile to fd 2...
I0908 11:40:03.079926  326989 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:03.080293  326989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
I0908 11:40:03.081044  326989 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:03.081247  326989 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:03.081848  326989 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
I0908 11:40:03.136973  326989 ssh_runner.go:195] Run: systemctl --version
I0908 11:40:03.137133  326989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
I0908 11:40:03.156438  326989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
I0908 11:40:03.246382  326989 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
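
Note: the JSON format is the easiest to post-process; for example, listing just the tagged names (jq is an assumption; any JSON consumer works):

    out/minikube-linux-arm64 -p functional-594147 image ls --format json | jq -r '.[].repoTags[]'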

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-594147 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-594147
size: "4788229"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 5c58ae1cfb88429cb0834a17b13a01c643c7339ac17aa5b55d9c21d97c125dda
repoDigests:
- localhost/minikube-local-cache-test@sha256:5dcc8cb7489faacd80d963e7ba8f87cb2d4b5850787a1f1c03a3f24be4c0eec6
repoTags:
- localhost/minikube-local-cache-test:functional-594147
size: "3330"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests:
- docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-594147 image ls --format yaml --alsologtostderr:
I0908 11:40:02.759731  326923 out.go:360] Setting OutFile to fd 1 ...
I0908 11:40:02.759944  326923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:02.759953  326923 out.go:374] Setting ErrFile to fd 2...
I0908 11:40:02.759958  326923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:02.760299  326923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
I0908 11:40:02.761723  326923 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:02.763942  326923 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:02.764709  326923 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
I0908 11:40:02.782601  326923 ssh_runner.go:195] Run: systemctl --version
I0908 11:40:02.782654  326923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
I0908 11:40:02.812202  326923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
I0908 11:40:02.902484  326923 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh pgrep buildkitd: exit status 1 (345.367031ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image build -t localhost/my-image:functional-594147 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 image build -t localhost/my-image:functional-594147 testdata/build --alsologtostderr: (3.393885282s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-594147 image build -t localhost/my-image:functional-594147 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4ca954eb097
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-594147
--> a06c28e1e67
Successfully tagged localhost/my-image:functional-594147
a06c28e1e67314f0328e2385528d2679b47c3066f9d489148a534225fe403ad2
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-594147 image build -t localhost/my-image:functional-594147 testdata/build --alsologtostderr:
I0908 11:40:03.348657  327077 out.go:360] Setting OutFile to fd 1 ...
I0908 11:40:03.349635  327077 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:03.349681  327077 out.go:374] Setting ErrFile to fd 2...
I0908 11:40:03.349709  327077 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:40:03.350126  327077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
I0908 11:40:03.350861  327077 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:03.351494  327077 config.go:182] Loaded profile config "functional-594147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:40:03.352049  327077 cli_runner.go:164] Run: docker container inspect functional-594147 --format={{.State.Status}}
I0908 11:40:03.381029  327077 ssh_runner.go:195] Run: systemctl --version
I0908 11:40:03.381102  327077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-594147
I0908 11:40:03.410580  327077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/functional-594147/id_rsa Username:docker}
I0908 11:40:03.506759  327077 build_images.go:161] Building image from path: /tmp/build.356512193.tar
I0908 11:40:03.506826  327077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 11:40:03.516206  327077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.356512193.tar
I0908 11:40:03.520763  327077 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.356512193.tar: stat -c "%s %y" /var/lib/minikube/build/build.356512193.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.356512193.tar': No such file or directory
I0908 11:40:03.520791  327077 ssh_runner.go:362] scp /tmp/build.356512193.tar --> /var/lib/minikube/build/build.356512193.tar (3072 bytes)
I0908 11:40:03.550205  327077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.356512193
I0908 11:40:03.561204  327077 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.356512193 -xf /var/lib/minikube/build/build.356512193.tar
I0908 11:40:03.571598  327077 crio.go:315] Building image: /var/lib/minikube/build/build.356512193
I0908 11:40:03.571672  327077 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-594147 /var/lib/minikube/build/build.356512193 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0908 11:40:06.644322  327077 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-594147 /var/lib/minikube/build/build.356512193 --cgroup-manager=cgroupfs: (3.072625254s)
I0908 11:40:06.644393  327077 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.356512193
I0908 11:40:06.653530  327077 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.356512193.tar
I0908 11:40:06.662984  327077 build_images.go:217] Built localhost/my-image:functional-594147 from /tmp/build.356512193.tar
I0908 11:40:06.663020  327077 build_images.go:133] succeeded building to: functional-594147
I0908 11:40:06.663026  327077 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
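The trace shows why this path is taken: buildkitd is not running in the node (the pgrep probe exits 1), so minikube falls back to packaging the local context as a tarball, copying it to /var/lib/minikube/build, unpacking it, and running podman build with the cgroupfs cgroup manager. The temp directory name (build.356512193) is per-run and removed afterwards, so the practical reproduction is the one-shot command plus a check that the image landed:

    # tar, scp, untar and podman-build all happen behind this one command
    out/minikube-linux-arm64 -p functional-594147 image build -t localhost/my-image:functional-594147 testdata/build
    # confirm the result is in the node's local store
    out/minikube-linux-arm64 -p functional-594147 image ls | grep my-image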

TestFunctional/parallel/ImageCommands/Setup (0.67s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-594147
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr: (1.353546701s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-594147
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 image load --daemon kicbase/echo-server:functional-594147 --alsologtostderr: (1.101228229s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-594147 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-594147 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-594147 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-594147 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 321811: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-594147 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-594147 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e670a100-7a0d-4afb-8b97-e62db9836aab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e670a100-7a0d-4afb-8b97-e62db9836aab] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003683046s
I0908 11:29:32.097750  295113 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image save kicbase/echo-server:functional-594147 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image rm kicbase/echo-server:functional-594147 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (1.007408347s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-594147
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 image save --daemon kicbase/echo-server:functional-594147 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-594147
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
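ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full round trip between the node's image store, a tarball, and the host's docker daemon. Condensed, with a hypothetical /tmp path standing in for the workspace path used above:

    # node store -> tarball -> back into the node store
    out/minikube-linux-arm64 -p functional-594147 image save kicbase/echo-server:functional-594147 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-594147 image rm kicbase/echo-server:functional-594147
    out/minikube-linux-arm64 -p functional-594147 image load /tmp/echo-server-save.tar
    # node store -> host docker daemon; note the localhost/ prefix on the restored tag
    out/minikube-linux-arm64 -p functional-594147 image save --daemon kicbase/echo-server:functional-594147
    docker image inspect localhost/kicbase/echo-server:functional-594147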

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-594147 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.8.149 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-594147 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
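The tunnel subtests trace one lifecycle: start `minikube tunnel` in the background, deploy a LoadBalancer service, wait for the tunnel to assign an ingress IP (10.107.8.149 in this run), hit the IP directly, then stop the tunnel so the routes are removed. A minimal sketch of the same loop, assuming the functional-594147 profile and testdata/testsvc.yaml:

    out/minikube-linux-arm64 -p functional-594147 tunnel &   # holds routes open while it runs
    kubectl --context functional-594147 apply -f testdata/testsvc.yaml
    # once the tunnel patches in an ingress IP, the service is reachable from the host
    IP=$(kubectl --context functional-594147 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP"
    kill %1   # stopping the tunnel tears the routes down again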

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "366.648866ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "63.126895ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "369.581235ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "63.379918ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
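The timings are the point here: a plain `profile list` probes each cluster's status (~370ms in this run), while `--light` skips the status checks and returns in ~63ms. For scripting, the JSON form pairs naturally with a query tool; a small sketch (the jq filter assumes the usual valid/invalid layout of the output):

    # fast, status-free listing for scripts
    out/minikube-linux-arm64 profile list -o json --light
    # e.g. extract just the profile names (assumes jq is installed)
    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'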

TestFunctional/parallel/MountCmd/any-port (9.05s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdany-port885170572/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757331577761257685" to /tmp/TestFunctionalparallelMountCmdany-port885170572/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757331577761257685" to /tmp/TestFunctionalparallelMountCmdany-port885170572/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757331577761257685" to /tmp/TestFunctionalparallelMountCmdany-port885170572/001/test-1757331577761257685
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.290467ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 11:39:38.117715  295113 retry.go:31] will retry after 658.006316ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 11:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 11:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 11:39 test-1757331577761257685
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh cat /mount-9p/test-1757331577761257685
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-594147 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ecb19efa-0ed2-42b8-8092-d82c41201625] Pending
helpers_test.go:352: "busybox-mount" [ecb19efa-0ed2-42b8-8092-d82c41201625] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ecb19efa-0ed2-42b8-8092-d82c41201625] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ecb19efa-0ed2-42b8-8092-d82c41201625] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003376666s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-594147 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdany-port885170572/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.05s)
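The any-port test drives minikube's 9p mount: the host directory is served over 9p and mounted at /mount-9p inside the node, so files written on the host show up in the guest (and, via hostPath, in pods) and vice versa. A minimal manual version with a hypothetical host directory:

    # export a host dir into the node over 9p (runs until killed)
    out/minikube-linux-arm64 mount -p functional-594147 /tmp/mount-demo:/mount-9p &
    # confirm from inside the node that the 9p mount is live, then inspect it
    out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-594147 ssh -- ls -la /mount-9p
    kill %1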

TestFunctional/parallel/MountCmd/specific-port (1.68s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdspecific-port1710135393/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.682857ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 11:39:47.178939  295113 retry.go:31] will retry after 275.231606ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdspecific-port1710135393/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-594147 ssh "sudo umount -f /mount-9p": exit status 1 (281.514081ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-594147 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdspecific-port1710135393/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup608616648/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup608616648/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup608616648/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-594147 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup608616648/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup608616648/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-594147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup608616648/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
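VerifyCleanup covers the escape hatch for orphaned mounts: three mount daemons are started, and a single --kill invocation tears all of them down, which is why the per-process stops that follow find nothing left to kill. The cleanup call in isolation:

    # kill every live minikube mount process for the profile in one shot
    out/minikube-linux-arm64 mount -p functional-594147 --kill=true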

TestFunctional/parallel/ServiceCmd/List (1.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 service list: (1.395141742s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-594147 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-594147 service list -o json: (1.450027226s)
functional_test.go:1504: Took "1.450109532s" to run "out/minikube-linux-arm64 -p functional-594147 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.45s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-594147
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-594147
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-594147
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestMultiControlPlane/serial/StartCluster (191.26s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 11:41:13.603111  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:42:36.675994  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m10.410696141s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (191.26s)

TestMultiControlPlane/serial/DeployApp (8.93s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 kubectl -- rollout status deployment/busybox: (5.728548885s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-5g8lb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-fwlk2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-n4vkp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-5g8lb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-fwlk2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-n4vkp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-5g8lb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-fwlk2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-n4vkp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.93s)
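DeployApp rolls out a three-replica busybox deployment, then resolves an external name, the in-cluster service name, and its fully qualified form from every replica. A minimal sketch of the same loop, assuming the ha-678995 context (like the test, it lists every pod in the default namespace):

    # resolve three names from each busybox replica
    for pod in $(kubectl --context ha-678995 get pods -o jsonpath='{.items[*].metadata.name}'); do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        kubectl --context ha-678995 exec "$pod" -- nslookup "$name"
      done
    done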

TestMultiControlPlane/serial/PingHostFromPods (1.71s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-5g8lb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-5g8lb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-fwlk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-fwlk2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-n4vkp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 kubectl -- exec busybox-7b57f96db7-n4vkp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)

TestMultiControlPlane/serial/AddWorkerNode (59.57s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node add --alsologtostderr -v 5
E0908 11:44:22.649198  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:22.655509  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:22.666980  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:22.688485  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:22.729947  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:22.811495  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:22.973008  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:23.294682  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:23.936797  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:25.218394  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:44:27.780742  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 node add --alsologtostderr -v 5: (58.536200189s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5: (1.037689472s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.57s)
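The cert_rotation errors interleaved above reference the functional-594147 profile's client certificate, which no longer exists by this point; they are background kubeconfig noise, not failures of the node add. The operation itself is a single command followed by a status check:

    # add a worker node (the default for node add) and confirm all four nodes report in
    out/minikube-linux-arm64 -p ha-678995 node add --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5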

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-678995 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.030333313s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.86s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --output json --alsologtostderr -v 5
E0908 11:44:32.902397  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 status --output json --alsologtostderr -v 5: (1.0308777s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp testdata/cp-test.txt ha-678995:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile907952104/001/cp-test_ha-678995.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995:/home/docker/cp-test.txt ha-678995-m02:/home/docker/cp-test_ha-678995_ha-678995-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test_ha-678995_ha-678995-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995:/home/docker/cp-test.txt ha-678995-m03:/home/docker/cp-test_ha-678995_ha-678995-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test_ha-678995_ha-678995-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995:/home/docker/cp-test.txt ha-678995-m04:/home/docker/cp-test_ha-678995_ha-678995-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test_ha-678995_ha-678995-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp testdata/cp-test.txt ha-678995-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile907952104/001/cp-test_ha-678995-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m02:/home/docker/cp-test.txt ha-678995:/home/docker/cp-test_ha-678995-m02_ha-678995.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test_ha-678995-m02_ha-678995.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m02:/home/docker/cp-test.txt ha-678995-m03:/home/docker/cp-test_ha-678995-m02_ha-678995-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test_ha-678995-m02_ha-678995-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m02:/home/docker/cp-test.txt ha-678995-m04:/home/docker/cp-test_ha-678995-m02_ha-678995-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test_ha-678995-m02_ha-678995-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp testdata/cp-test.txt ha-678995-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test.txt"
E0908 11:44:43.144575  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile907952104/001/cp-test_ha-678995-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m03:/home/docker/cp-test.txt ha-678995:/home/docker/cp-test_ha-678995-m03_ha-678995.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test_ha-678995-m03_ha-678995.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m03:/home/docker/cp-test.txt ha-678995-m02:/home/docker/cp-test_ha-678995-m03_ha-678995-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test_ha-678995-m03_ha-678995-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m03:/home/docker/cp-test.txt ha-678995-m04:/home/docker/cp-test_ha-678995-m03_ha-678995-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test_ha-678995-m03_ha-678995-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp testdata/cp-test.txt ha-678995-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile907952104/001/cp-test_ha-678995-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m04:/home/docker/cp-test.txt ha-678995:/home/docker/cp-test_ha-678995-m04_ha-678995.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995 "sudo cat /home/docker/cp-test_ha-678995-m04_ha-678995.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m04:/home/docker/cp-test.txt ha-678995-m02:/home/docker/cp-test_ha-678995-m04_ha-678995-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test_ha-678995-m04_ha-678995-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 cp ha-678995-m04:/home/docker/cp-test.txt ha-678995-m03:/home/docker/cp-test_ha-678995-m04_ha-678995-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m03 "sudo cat /home/docker/cp-test_ha-678995-m04_ha-678995-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.86s)
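CopyFile walks the full copy matrix for all four machines: host to node, node back to host, and node to node, verifying each transfer with `ssh -n ... sudo cat`. The three shapes of the command, by hand (the /tmp destination is illustrative):

    # host -> node
    out/minikube-linux-arm64 -p ha-678995 cp testdata/cp-test.txt ha-678995:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-arm64 -p ha-678995 cp ha-678995:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    out/minikube-linux-arm64 -p ha-678995 cp ha-678995:/home/docker/cp-test.txt ha-678995-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-678995 ssh -n ha-678995-m02 "sudo cat /home/docker/cp-test.txt"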

TestMultiControlPlane/serial/StopSecondaryNode (12.72s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node stop m02 --alsologtostderr -v 5
E0908 11:45:03.626200  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 node stop m02 --alsologtostderr -v 5: (11.955387107s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5: exit status 7 (766.772158ms)

-- stdout --
	ha-678995
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-678995-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-678995-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-678995-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 11:45:04.165530  342919 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:45:04.165714  342919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:45:04.165725  342919 out.go:374] Setting ErrFile to fd 2...
	I0908 11:45:04.165731  342919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:45:04.166008  342919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:45:04.166211  342919 out.go:368] Setting JSON to false
	I0908 11:45:04.166248  342919 mustload.go:65] Loading cluster: ha-678995
	I0908 11:45:04.166647  342919 config.go:182] Loaded profile config "ha-678995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:45:04.166663  342919 status.go:174] checking status of ha-678995 ...
	I0908 11:45:04.167199  342919 cli_runner.go:164] Run: docker container inspect ha-678995 --format={{.State.Status}}
	I0908 11:45:04.168160  342919 notify.go:220] Checking for updates...
	I0908 11:45:04.187298  342919 status.go:371] ha-678995 host status = "Running" (err=<nil>)
	I0908 11:45:04.187319  342919 host.go:66] Checking if "ha-678995" exists ...
	I0908 11:45:04.187620  342919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-678995
	I0908 11:45:04.209537  342919 host.go:66] Checking if "ha-678995" exists ...
	I0908 11:45:04.209864  342919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:45:04.209932  342919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-678995
	I0908 11:45:04.229640  342919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/ha-678995/id_rsa Username:docker}
	I0908 11:45:04.319804  342919 ssh_runner.go:195] Run: systemctl --version
	I0908 11:45:04.324969  342919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:45:04.339585  342919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:45:04.429845  342919 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 11:45:04.414117596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 11:45:04.430428  342919 kubeconfig.go:125] found "ha-678995" server: "https://192.168.49.254:8443"
	I0908 11:45:04.430464  342919 api_server.go:166] Checking apiserver status ...
	I0908 11:45:04.430515  342919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:45:04.447693  342919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	I0908 11:45:04.458765  342919 api_server.go:182] apiserver freezer: "6:freezer:/docker/aaecfd7d132d577236fce25d5b015acaae742a6c7f9f2a45c22887134c1fcb90/crio/crio-3c1e50ee4e7b8886a400f7de39d6b89e1147ab26f9472b0709eb4d4ca8e34a9d"
	I0908 11:45:04.458834  342919 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/aaecfd7d132d577236fce25d5b015acaae742a6c7f9f2a45c22887134c1fcb90/crio/crio-3c1e50ee4e7b8886a400f7de39d6b89e1147ab26f9472b0709eb4d4ca8e34a9d/freezer.state
	I0908 11:45:04.469223  342919 api_server.go:204] freezer state: "THAWED"
	I0908 11:45:04.469249  342919 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 11:45:04.477995  342919 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 11:45:04.478029  342919 status.go:463] ha-678995 apiserver status = Running (err=<nil>)
	I0908 11:45:04.478042  342919 status.go:176] ha-678995 status: &{Name:ha-678995 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:45:04.478059  342919 status.go:174] checking status of ha-678995-m02 ...
	I0908 11:45:04.478361  342919 cli_runner.go:164] Run: docker container inspect ha-678995-m02 --format={{.State.Status}}
	I0908 11:45:04.497319  342919 status.go:371] ha-678995-m02 host status = "Stopped" (err=<nil>)
	I0908 11:45:04.497354  342919 status.go:384] host is not running, skipping remaining checks
	I0908 11:45:04.497362  342919 status.go:176] ha-678995-m02 status: &{Name:ha-678995-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:45:04.497382  342919 status.go:174] checking status of ha-678995-m03 ...
	I0908 11:45:04.497709  342919 cli_runner.go:164] Run: docker container inspect ha-678995-m03 --format={{.State.Status}}
	I0908 11:45:04.516220  342919 status.go:371] ha-678995-m03 host status = "Running" (err=<nil>)
	I0908 11:45:04.516245  342919 host.go:66] Checking if "ha-678995-m03" exists ...
	I0908 11:45:04.516549  342919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-678995-m03
	I0908 11:45:04.533413  342919 host.go:66] Checking if "ha-678995-m03" exists ...
	I0908 11:45:04.533735  342919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:45:04.533825  342919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-678995-m03
	I0908 11:45:04.560641  342919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/ha-678995-m03/id_rsa Username:docker}
	I0908 11:45:04.647790  342919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:45:04.661526  342919 kubeconfig.go:125] found "ha-678995" server: "https://192.168.49.254:8443"
	I0908 11:45:04.661564  342919 api_server.go:166] Checking apiserver status ...
	I0908 11:45:04.661617  342919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:45:04.678945  342919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup
	I0908 11:45:04.689915  342919 api_server.go:182] apiserver freezer: "6:freezer:/docker/bb006fd94f9e126db35dbe2dfa892c175c2ea630ba7904cc57675ad104b577e9/crio/crio-6f9315a743a5d5587c88b256b3fd086591ce65b3cc94449469cb462889063158"
	I0908 11:45:04.690017  342919 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb006fd94f9e126db35dbe2dfa892c175c2ea630ba7904cc57675ad104b577e9/crio/crio-6f9315a743a5d5587c88b256b3fd086591ce65b3cc94449469cb462889063158/freezer.state
	I0908 11:45:04.699681  342919 api_server.go:204] freezer state: "THAWED"
	I0908 11:45:04.699752  342919 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 11:45:04.708298  342919 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 11:45:04.708389  342919 status.go:463] ha-678995-m03 apiserver status = Running (err=<nil>)
	I0908 11:45:04.708413  342919 status.go:176] ha-678995-m03 status: &{Name:ha-678995-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:45:04.708463  342919 status.go:174] checking status of ha-678995-m04 ...
	I0908 11:45:04.708854  342919 cli_runner.go:164] Run: docker container inspect ha-678995-m04 --format={{.State.Status}}
	I0908 11:45:04.726306  342919 status.go:371] ha-678995-m04 host status = "Running" (err=<nil>)
	I0908 11:45:04.726334  342919 host.go:66] Checking if "ha-678995-m04" exists ...
	I0908 11:45:04.726634  342919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-678995-m04
	I0908 11:45:04.745298  342919 host.go:66] Checking if "ha-678995-m04" exists ...
	I0908 11:45:04.745747  342919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:45:04.745836  342919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-678995-m04
	I0908 11:45:04.766926  342919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/ha-678995-m04/id_rsa Username:docker}
	I0908 11:45:04.859015  342919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:45:04.870855  342919 status.go:176] ha-678995-m04 status: &{Name:ha-678995-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.72s)
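
The stderr above walks through minikube's apiserver health check step by step: pgrep for the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz on the HA endpoint. Below is a minimal standalone Go sketch of that same sequence; it assumes it runs directly on the node (the real check issues these commands over SSH) and that the cgroup v1 freezer layout matches the paths in the log.

// healthprobe.go - sketch of the apiserver check performed in the log above.
// Assumptions: runs on the node itself; cgroup v1 freezer hierarchy as in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: newest kube-apiserver process, as in `sudo pgrep -xnf kube-apiserver.*minikube.*`.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}

	// Step 2: read the freezer state of that PID's cgroup, mirroring the
	// /sys/fs/cgroup/freezer/.../freezer.state read in the log.
	state, err := exec.Command("sh", "-c",
		fmt.Sprintf("cat /sys/fs/cgroup/freezer$(grep freezer /proc/%s/cgroup | cut -d: -f3)/freezer.state",
			strings.TrimSpace(string(pid)))).Output()
	if err == nil && strings.TrimSpace(string(state)) != "THAWED" {
		fmt.Println("apiserver cgroup is not thawed:", strings.TrimSpace(string(state)))
		return
	}

	// Step 3: healthz probe against the HA virtual endpoint seen in the log.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: skip cert verification
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode)
}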

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 node start m02 --alsologtostderr -v 5: (29.228899946s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5: (1.410267242s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.221447496s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.28s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 stop --alsologtostderr -v 5
E0908 11:45:44.587801  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 stop --alsologtostderr -v 5: (27.126164859s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 start --wait true --alsologtostderr -v 5
E0908 11:46:13.603331  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:47:06.509632  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 start --wait true --alsologtostderr -v 5: (1m38.958530148s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.28s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.59s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 node delete m03 --alsologtostderr -v 5: (11.63594085s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.59s)
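
The go-template handed to kubectl above is ordinary Go text/template syntax evaluated over the `get nodes` JSON. The self-contained sketch below runs the identical template string against a stubbed node list (the stub JSON is illustrative, not live cluster data); it prints one " True" per Ready node, which is what the test checks for.

// readytemplate.go - evaluates the same template string the test passes to
// `kubectl get nodes -o go-template`, here against stub JSON.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Illustrative stand-in for the `kubectl get nodes` response.
const nodesJSON = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

// Same template string as in the test invocation above.
const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// kubectl renders templates over untyped JSON, so a plain interface{} works here too.
	t := template.Must(template.New("ready").Parse(ready))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}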

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (35.83s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 stop --alsologtostderr -v 5: (35.692114345s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5: exit status 7 (138.109879ms)

-- stdout --
	ha-678995
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-678995-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-678995-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 11:48:33.065483  356817 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:48:33.065722  356817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:48:33.065736  356817 out.go:374] Setting ErrFile to fd 2...
	I0908 11:48:33.065741  356817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:48:33.066046  356817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 11:48:33.066280  356817 out.go:368] Setting JSON to false
	I0908 11:48:33.066337  356817 mustload.go:65] Loading cluster: ha-678995
	I0908 11:48:33.066439  356817 notify.go:220] Checking for updates...
	I0908 11:48:33.066827  356817 config.go:182] Loaded profile config "ha-678995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:48:33.066854  356817 status.go:174] checking status of ha-678995 ...
	I0908 11:48:33.067409  356817 cli_runner.go:164] Run: docker container inspect ha-678995 --format={{.State.Status}}
	I0908 11:48:33.087777  356817 status.go:371] ha-678995 host status = "Stopped" (err=<nil>)
	I0908 11:48:33.087803  356817 status.go:384] host is not running, skipping remaining checks
	I0908 11:48:33.087811  356817 status.go:176] ha-678995 status: &{Name:ha-678995 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:48:33.087842  356817 status.go:174] checking status of ha-678995-m02 ...
	I0908 11:48:33.088140  356817 cli_runner.go:164] Run: docker container inspect ha-678995-m02 --format={{.State.Status}}
	I0908 11:48:33.115043  356817 status.go:371] ha-678995-m02 host status = "Stopped" (err=<nil>)
	I0908 11:48:33.115070  356817 status.go:384] host is not running, skipping remaining checks
	I0908 11:48:33.115077  356817 status.go:176] ha-678995-m02 status: &{Name:ha-678995-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:48:33.115097  356817 status.go:174] checking status of ha-678995-m04 ...
	I0908 11:48:33.115380  356817 cli_runner.go:164] Run: docker container inspect ha-678995-m04 --format={{.State.Status}}
	I0908 11:48:33.136059  356817 status.go:371] ha-678995-m04 host status = "Stopped" (err=<nil>)
	I0908 11:48:33.136081  356817 status.go:384] host is not running, skipping remaining checks
	I0908 11:48:33.136089  356817 status.go:176] ha-678995-m04 status: &{Name:ha-678995-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.83s)
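
Note the exit status 7 above: `minikube status` reports cluster state partly through its exit code, so a non-zero exit paired with per-node "Stopped" lines is expected data here, not a failure. A small Go sketch of invoking it and treating the exit code that way follows (reading the non-zero code as "stopped" is inferred from this run, not asserted as minikube's full exit-code contract):

// statuswrap.go - invoke `minikube status` and read its exit code as data.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-678995", "status")
	out, err := cmd.Output() // stdout still carries the per-node breakdown on non-zero exit
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cluster running:\n%s", out)
	case errors.As(err, &exitErr):
		// e.g. exit 7 in the log above, with every node reported Stopped.
		fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}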

TestMultiControlPlane/serial/RestartCluster (86.11s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 11:49:22.649937  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:49:50.351998  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m25.136219193s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.11s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (81.11s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 node add --control-plane --alsologtostderr -v 5
E0908 11:51:13.603256  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 node add --control-plane --alsologtostderr -v 5: (1m20.04475542s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-678995 status --alsologtostderr -v 5: (1.061707522s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (80.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-382511 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-382511 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.586778198s)
--- PASS: TestJSONOutput/start/Command (80.59s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-382511 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-382511 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-382511 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-382511 --output=json --user=testUser: (5.862706513s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-901879 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-901879 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.491599ms)

-- stdout --
	{"specversion":"1.0","id":"189f50c5-9952-42af-9bf5-57930ed99aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-901879] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73cd5062-6228-48ce-9e1c-515caacdefa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21512"}}
	{"specversion":"1.0","id":"cd79cee7-cff7-400a-8791-17bd781d406f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7fa0358e-7fb3-40fd-81b3-f29524e5bf55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig"}}
	{"specversion":"1.0","id":"b6c19748-8c15-42b2-8875-7d5c53e04396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube"}}
	{"specversion":"1.0","id":"308604e5-df0e-4208-abd4-d11853cbec61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"afb028fe-cd52-4a33-bf68-8168121ff668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dffc295d-0d63-466b-859a-57461f361f37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-901879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-901879
--- PASS: TestErrorJSONOutput (0.25s)
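
Every stdout line above is a self-describing CloudEvents-style JSON record, which is what makes `--output=json` machine-checkable. A minimal Go sketch of decoding such a stream follows; the sample line is copied from the error event above, and the field names come straight from that output.

// jsonevents.go - decode the line-delimited JSON events emitted by
// `minikube --output=json`, as captured in the stdout above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Sample copied verbatim from the test output above.
	sample := `{"specversion":"1.0","id":"dffc295d-0d63-466b-859a-57461f361f37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// Per the log, the error event carries the exit code as a string.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}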

TestKicCustomNetwork/create_custom_network (40.39s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-638015 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-638015 --network=: (38.290992288s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-638015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-638015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-638015: (2.071514427s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.39s)

TestKicCustomNetwork/use_default_bridge_network (32.58s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-797235 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-797235 --network=bridge: (30.568047301s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-797235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-797235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-797235: (1.985665903s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.58s)

TestKicExistingNetwork (32.32s)

=== RUN   TestKicExistingNetwork
I0908 11:54:15.583043  295113 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 11:54:15.597883  295113 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 11:54:15.597970  295113 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 11:54:15.597989  295113 cli_runner.go:164] Run: docker network inspect existing-network
W0908 11:54:15.616959  295113 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 11:54:15.616992  295113 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0908 11:54:15.617011  295113 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0908 11:54:15.617131  295113 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 11:54:15.635807  295113 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7ffe02b033b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:ff:83:22:c7:61} reservation:<nil>}
I0908 11:54:15.636113  295113 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b0d590}
I0908 11:54:15.636135  295113 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 11:54:15.636188  295113 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 11:54:15.692027  295113 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-049340 --network=existing-network
E0908 11:54:22.653935  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-049340 --network=existing-network: (30.141100782s)
helpers_test.go:175: Cleaning up "existing-network-049340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-049340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-049340: (2.03709789s)
I0908 11:54:47.886793  295113 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.32s)
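
The network_create.go lines above show how minikube provisioned the pre-existing network for this test: probe the name, skip the taken 192.168.49.0/24 subnet, then issue a single `docker network create` carrying minikube's labels. Below is a sketch that replays that invocation from Go, with the flags copied verbatim from the log (clean up afterwards with `docker network rm existing-network`):

// netcreate.go - rerun the `docker network create` command from the log above via os/exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("network create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network, id: %s", out)
}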

TestKicCustomSubnet (38.18s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-491710 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-491710 --subnet=192.168.60.0/24: (35.976263238s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-491710 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-491710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-491710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-491710: (2.171083034s)
--- PASS: TestKicCustomSubnet (38.18s)

TestKicStaticIP (38.14s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-523239 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-523239 --static-ip=192.168.200.200: (35.803934187s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-523239 ip
helpers_test.go:175: Cleaning up "static-ip-523239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-523239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-523239: (2.168785242s)
--- PASS: TestKicStaticIP (38.14s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (64.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-204709 --driver=docker  --container-runtime=crio
E0908 11:56:13.602511  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-204709 --driver=docker  --container-runtime=crio: (28.76715083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-207228 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-207228 --driver=docker  --container-runtime=crio: (30.251381813s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-204709
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-207228
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-207228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-207228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-207228: (2.053595499s)
helpers_test.go:175: Cleaning up "first-204709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-204709
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-204709: (1.950393827s)
--- PASS: TestMinikubeProfile (64.42s)

TestMountStart/serial/StartWithMountFirst (6.6s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-294912 --memory=3072 --mount-string /tmp/TestMountStartserial2026087029/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-294912 --memory=3072 --mount-string /tmp/TestMountStartserial2026087029/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.601823798s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.60s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-294912 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-296750 --memory=3072 --mount-string /tmp/TestMountStartserial2026087029/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-296750 --memory=3072 --mount-string /tmp/TestMountStartserial2026087029/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.238586959s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.24s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-296750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-294912 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-294912 --alsologtostderr -v=5: (1.629965194s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-296750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-296750
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-296750: (1.204309226s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.38s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-296750
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-296750: (6.374900889s)
--- PASS: TestMountStart/serial/RestartStopped (7.38s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-296750 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (135.17s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809894 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 11:59:16.677805  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:22.649403  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809894 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m14.649326817s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.17s)

TestMultiNode/serial/DeployApp2Nodes (6.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-809894 -- rollout status deployment/busybox: (4.552815498s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-bnrdt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-m2mv7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-bnrdt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-m2mv7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-bnrdt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-m2mv7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.83s)

TestMultiNode/serial/PingHostFrom2Pods (2.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-bnrdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-bnrdt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:583: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-bnrdt -- sh -c "ping -c 1 192.168.67.1": (1.006045934s)
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-m2mv7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809894 -- exec busybox-7b57f96db7-m2mv7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.77s)

TestMultiNode/serial/AddNode (58.43s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-809894 -v=5 --alsologtostderr
E0908 12:00:45.713286  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-809894 -v=5 --alsologtostderr: (57.446838551s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.43s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-809894 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.75s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

TestMultiNode/serial/CopyFile (10.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp testdata/cp-test.txt multinode-809894:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4235348732/001/cp-test_multinode-809894.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894:/home/docker/cp-test.txt multinode-809894-m02:/home/docker/cp-test_multinode-809894_multinode-809894-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test_multinode-809894_multinode-809894-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894:/home/docker/cp-test.txt multinode-809894-m03:/home/docker/cp-test_multinode-809894_multinode-809894-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m03 "sudo cat /home/docker/cp-test_multinode-809894_multinode-809894-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp testdata/cp-test.txt multinode-809894-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4235348732/001/cp-test_multinode-809894-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894-m02:/home/docker/cp-test.txt multinode-809894:/home/docker/cp-test_multinode-809894-m02_multinode-809894.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /home/docker/cp-test_multinode-809894-m02_multinode-809894.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894-m02:/home/docker/cp-test.txt multinode-809894-m03:/home/docker/cp-test_multinode-809894-m02_multinode-809894-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m03 "sudo cat /home/docker/cp-test_multinode-809894-m02_multinode-809894-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp testdata/cp-test.txt multinode-809894-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4235348732/001/cp-test_multinode-809894-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894-m03:/home/docker/cp-test.txt multinode-809894:/home/docker/cp-test_multinode-809894-m03_multinode-809894.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /home/docker/cp-test_multinode-809894-m03_multinode-809894.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894-m03:/home/docker/cp-test.txt multinode-809894-m02:/home/docker/cp-test_multinode-809894-m03_multinode-809894-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test_multinode-809894-m03_multinode-809894-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.21s)
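Note: the copy matrix above exercises the three forms of minikube's file copy (host-to-node, node-to-host, node-to-node). A minimal sketch using the profile and node names recorded in the log; the host-side destination path below is illustrative:

	# host -> node
	out/minikube-linux-arm64 -p multinode-809894 cp testdata/cp-test.txt multinode-809894:/home/docker/cp-test.txt
	# node -> host (destination path chosen here for illustration)
	out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894:/home/docker/cp-test.txt /tmp/cp-test_multinode-809894.txt
	# node -> node
	out/minikube-linux-arm64 -p multinode-809894 cp multinode-809894:/home/docker/cp-test.txt multinode-809894-m02:/home/docker/cp-test.txt
	# each copy is then verified over ssh, e.g.:
	out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894-m02 "sudo cat /home/docker/cp-test.txt"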

TestMultiNode/serial/StopNode (2.30s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-809894 node stop m03: (1.232040869s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status
E0908 12:01:13.603174  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809894 status: exit status 7 (552.047681ms)
-- stdout --
	multinode-809894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-809894-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-809894-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr: exit status 7 (516.642455ms)
-- stdout --
	multinode-809894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-809894-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-809894-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 12:01:13.711620  410097 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:01:13.711846  410097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:01:13.711862  410097 out.go:374] Setting ErrFile to fd 2...
	I0908 12:01:13.711869  410097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:01:13.712160  410097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 12:01:13.712415  410097 out.go:368] Setting JSON to false
	I0908 12:01:13.712465  410097 mustload.go:65] Loading cluster: multinode-809894
	I0908 12:01:13.712570  410097 notify.go:220] Checking for updates...
	I0908 12:01:13.712929  410097 config.go:182] Loaded profile config "multinode-809894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:01:13.712954  410097 status.go:174] checking status of multinode-809894 ...
	I0908 12:01:13.713961  410097 cli_runner.go:164] Run: docker container inspect multinode-809894 --format={{.State.Status}}
	I0908 12:01:13.733429  410097 status.go:371] multinode-809894 host status = "Running" (err=<nil>)
	I0908 12:01:13.733458  410097 host.go:66] Checking if "multinode-809894" exists ...
	I0908 12:01:13.733841  410097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-809894
	I0908 12:01:13.761586  410097 host.go:66] Checking if "multinode-809894" exists ...
	I0908 12:01:13.761934  410097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:01:13.761979  410097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-809894
	I0908 12:01:13.780387  410097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/multinode-809894/id_rsa Username:docker}
	I0908 12:01:13.871674  410097 ssh_runner.go:195] Run: systemctl --version
	I0908 12:01:13.877807  410097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:01:13.889554  410097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:01:13.950991  410097 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 12:01:13.941016077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:01:13.951571  410097 kubeconfig.go:125] found "multinode-809894" server: "https://192.168.67.2:8443"
	I0908 12:01:13.951609  410097 api_server.go:166] Checking apiserver status ...
	I0908 12:01:13.951664  410097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:01:13.963227  410097 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0908 12:01:13.973907  410097 api_server.go:182] apiserver freezer: "6:freezer:/docker/a3589bc6c0a05d25c29bec7fe7677beb4a27c419db67fdfb373c813ddb9566d8/crio/crio-f2e730a6d76d9cd674275568ffcf5eaf1ee5e4c9543fd6afdb3cec01d18c348f"
	I0908 12:01:13.973973  410097 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a3589bc6c0a05d25c29bec7fe7677beb4a27c419db67fdfb373c813ddb9566d8/crio/crio-f2e730a6d76d9cd674275568ffcf5eaf1ee5e4c9543fd6afdb3cec01d18c348f/freezer.state
	I0908 12:01:13.984810  410097 api_server.go:204] freezer state: "THAWED"
	I0908 12:01:13.984842  410097 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 12:01:13.993237  410097 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 12:01:13.993267  410097 status.go:463] multinode-809894 apiserver status = Running (err=<nil>)
	I0908 12:01:13.993279  410097 status.go:176] multinode-809894 status: &{Name:multinode-809894 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:01:13.993327  410097 status.go:174] checking status of multinode-809894-m02 ...
	I0908 12:01:13.993678  410097 cli_runner.go:164] Run: docker container inspect multinode-809894-m02 --format={{.State.Status}}
	I0908 12:01:14.014392  410097 status.go:371] multinode-809894-m02 host status = "Running" (err=<nil>)
	I0908 12:01:14.014420  410097 host.go:66] Checking if "multinode-809894-m02" exists ...
	I0908 12:01:14.014751  410097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-809894-m02
	I0908 12:01:14.033296  410097 host.go:66] Checking if "multinode-809894-m02" exists ...
	I0908 12:01:14.033618  410097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:01:14.033670  410097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-809894-m02
	I0908 12:01:14.052878  410097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/21512-293252/.minikube/machines/multinode-809894-m02/id_rsa Username:docker}
	I0908 12:01:14.139342  410097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:01:14.151301  410097 status.go:176] multinode-809894-m02 status: &{Name:multinode-809894-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:01:14.151338  410097 status.go:174] checking status of multinode-809894-m03 ...
	I0908 12:01:14.151656  410097 cli_runner.go:164] Run: docker container inspect multinode-809894-m03 --format={{.State.Status}}
	I0908 12:01:14.169251  410097 status.go:371] multinode-809894-m03 host status = "Stopped" (err=<nil>)
	I0908 12:01:14.169277  410097 status.go:384] host is not running, skipping remaining checks
	I0908 12:01:14.169285  410097 status.go:176] multinode-809894-m03 status: &{Name:multinode-809894-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
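Note: the --alsologtostderr trace above records how status decides the apiserver is Running: it pgreps the process inside the node, confirms the process's freezer cgroup is THAWED, and probes /healthz. A minimal sketch of the same checks by hand, using the profile from the log; the cgroup path is a placeholder that must be read from /proc/<pid>/cgroup, and the healthz probe may require the client certificate from the kubeconfig:

	# locate the kube-apiserver process inside the node
	out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# check the freezer state for that pid's cgroup (<cgroup-path> is a placeholder)
	out/minikube-linux-arm64 -p multinode-809894 ssh -n multinode-809894 "sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state"
	# probe the endpoint the trace curls; the trace shows it returning 200 with body "ok"
	curl -k https://192.168.67.2:8443/healthz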

TestMultiNode/serial/StartAfterStop (7.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-809894 node start m03 -v=5 --alsologtostderr: (6.898897123s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.64s)

TestMultiNode/serial/RestartKeepsNodes (75.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-809894
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-809894
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-809894: (24.842265395s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809894 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809894 --wait=true -v=5 --alsologtostderr: (50.751452237s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-809894
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.72s)

TestMultiNode/serial/DeleteNode (5.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-809894 node delete m03: (4.854213692s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

TestMultiNode/serial/StopMultiNode (23.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-809894 stop: (23.722433491s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809894 status: exit status 7 (99.175391ms)
-- stdout --
	multinode-809894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-809894-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr: exit status 7 (96.043984ms)
-- stdout --
	multinode-809894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-809894-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 12:03:06.956231  417990 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:03:06.956348  417990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:03:06.956359  417990 out.go:374] Setting ErrFile to fd 2...
	I0908 12:03:06.956365  417990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:03:06.956632  417990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 12:03:06.956848  417990 out.go:368] Setting JSON to false
	I0908 12:03:06.956906  417990 mustload.go:65] Loading cluster: multinode-809894
	I0908 12:03:06.956994  417990 notify.go:220] Checking for updates...
	I0908 12:03:06.957307  417990 config.go:182] Loaded profile config "multinode-809894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:03:06.957330  417990 status.go:174] checking status of multinode-809894 ...
	I0908 12:03:06.958200  417990 cli_runner.go:164] Run: docker container inspect multinode-809894 --format={{.State.Status}}
	I0908 12:03:06.975898  417990 status.go:371] multinode-809894 host status = "Stopped" (err=<nil>)
	I0908 12:03:06.975923  417990 status.go:384] host is not running, skipping remaining checks
	I0908 12:03:06.975943  417990 status.go:176] multinode-809894 status: &{Name:multinode-809894 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:03:06.975976  417990 status.go:174] checking status of multinode-809894-m02 ...
	I0908 12:03:06.976283  417990 cli_runner.go:164] Run: docker container inspect multinode-809894-m02 --format={{.State.Status}}
	I0908 12:03:06.999333  417990 status.go:371] multinode-809894-m02 host status = "Stopped" (err=<nil>)
	I0908 12:03:06.999367  417990 status.go:384] host is not running, skipping remaining checks
	I0908 12:03:06.999375  417990 status.go:176] multinode-809894-m02 status: &{Name:multinode-809894-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.92s)

TestMultiNode/serial/RestartMultiNode (56.61s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809894 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809894 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (55.934755996s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809894 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.61s)

TestMultiNode/serial/ValidateNameConflict (35.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-809894
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809894-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-809894-m02 --driver=docker  --container-runtime=crio: exit status 14 (104.347015ms)
-- stdout --
	* [multinode-809894-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-809894-m02' is duplicated with machine name 'multinode-809894-m02' in profile 'multinode-809894'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809894-m03 --driver=docker  --container-runtime=crio
E0908 12:04:22.649185  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809894-m03 --driver=docker  --container-runtime=crio: (32.559525269s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-809894
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-809894: exit status 80 (360.444214ms)
-- stdout --
	* Adding node m03 to cluster multinode-809894 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-809894-m03 already exists in multinode-809894-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-809894-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-809894-m03: (1.997160901s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.08s)
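Note: profile names must not collide with the machine names of an existing multi-node profile. A minimal repro of the two failure modes asserted above, with the exit codes taken from the log:

	# exit 14 (MK_USAGE): the profile name duplicates the m02 machine of multinode-809894
	out/minikube-linux-arm64 start -p multinode-809894-m02 --driver=docker --container-runtime=crio
	# exit 80 (GUEST_NODE_ADD): the next node name, m03, is already taken by another profile
	out/minikube-linux-arm64 node add -p multinode-809894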

TestPreload (125.42s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-709098 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-709098 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.269833104s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-709098 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-709098 image pull gcr.io/k8s-minikube/busybox: (3.693221896s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-709098
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-709098: (5.778663363s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-709098 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0908 12:06:13.603100  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-709098 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.057037558s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-709098 image list
helpers_test.go:175: Cleaning up "test-preload-709098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-709098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-709098: (2.37475641s)
--- PASS: TestPreload (125.42s)

TestScheduledStopUnix (112.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-737260 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-737260 --memory=3072 --driver=docker  --container-runtime=crio: (36.207740856s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-737260 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-737260 -n scheduled-stop-737260
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-737260 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 12:07:25.039660  295113 retry.go:31] will retry after 132.351µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.045866  295113 retry.go:31] will retry after 158.611µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.047154  295113 retry.go:31] will retry after 312.811µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.048458  295113 retry.go:31] will retry after 338.525µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.049052  295113 retry.go:31] will retry after 367.477µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.049593  295113 retry.go:31] will retry after 940.207µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.050778  295113 retry.go:31] will retry after 944.422µs: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.051892  295113 retry.go:31] will retry after 1.10442ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.054229  295113 retry.go:31] will retry after 3.690689ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.058470  295113 retry.go:31] will retry after 2.127377ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.061747  295113 retry.go:31] will retry after 3.087344ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.064941  295113 retry.go:31] will retry after 11.955197ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.078589  295113 retry.go:31] will retry after 13.774552ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.093035  295113 retry.go:31] will retry after 14.959077ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
I0908 12:07:25.109028  295113 retry.go:31] will retry after 40.83388ms: open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/scheduled-stop-737260/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-737260 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-737260 -n scheduled-stop-737260
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-737260
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-737260 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-737260
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-737260: exit status 7 (76.284708ms)
-- stdout --
	scheduled-stop-737260
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-737260 -n scheduled-stop-737260
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-737260 -n scheduled-stop-737260: exit status 7 (71.599879ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-737260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-737260
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-737260: (4.255315769s)
--- PASS: TestScheduledStopUnix (112.09s)
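Note: the schedule/cancel/re-schedule cycle above can be driven by hand. A minimal sketch using the flags the test exercises and the profile name from the log:

	# arm a stop 5 minutes out, then cancel it before it fires
	out/minikube-linux-arm64 stop -p scheduled-stop-737260 --schedule 5m
	out/minikube-linux-arm64 stop -p scheduled-stop-737260 --cancel-scheduled
	# re-arm with a short window and poll until the host reports Stopped
	out/minikube-linux-arm64 stop -p scheduled-stop-737260 --schedule 15s
	out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-737260
	# once the timer fires, status exits 7 and prints "Stopped" (expected, not an error)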

TestInsufficientStorage (13.25s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-070165 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-070165 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.732576189s)
-- stdout --
	{"specversion":"1.0","id":"d3d779d8-a428-4626-a426-9297eb16cfd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-070165] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f5c4337-0a40-4422-8c74-9b784beddef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21512"}}
	{"specversion":"1.0","id":"1de1c213-06ab-4129-9e92-05fcfd482fe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"71625aa0-1887-426d-b18f-3b062939d7cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig"}}
	{"specversion":"1.0","id":"ac5c7d6a-65ad-4254-b9af-27b55f3d7496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube"}}
	{"specversion":"1.0","id":"db6ec1b6-d45e-45f5-b393-38c6e534398e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"90f65535-fcba-4fe6-a3ec-92ca276643cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"175c417c-3887-478f-bf2f-905c0aa40412","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fae0299a-b7d6-4923-8cbd-8a74fad95b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9bb3232c-7572-4750-b4b8-cc397cd3f6eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"63f7ecad-ef60-428e-ab8a-87c1a4f15d0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4778f8dd-07bd-42fe-84d9-0f2a0dacd85e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-070165\" primary control-plane node in \"insufficient-storage-070165\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"316fca19-5d21-4204-9398-f5a8b07b924e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ef3f1f3-a5db-49a4-893f-5bbadbd87f98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"27ce6253-85ec-4373-8b78-b75974b8862b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-070165 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-070165 --output=json --layout=cluster: exit status 7 (299.863722ms)
-- stdout --
	{"Name":"insufficient-storage-070165","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-070165","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 12:08:51.386281  435408 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-070165" does not appear in /home/jenkins/minikube-integration/21512-293252/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-070165 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-070165 --output=json --layout=cluster: exit status 7 (297.162091ms)
-- stdout --
	{"Name":"insufficient-storage-070165","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-070165","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 12:08:51.683045  435471 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-070165" does not appear in /home/jenkins/minikube-integration/21512-293252/kubeconfig
	E0908 12:08:51.693608  435471 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/insufficient-storage-070165/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-070165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-070165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-070165: (1.922666575s)
--- PASS: TestInsufficientStorage (13.25s)
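Note: with --output=json, start emits one CloudEvents object per line (the io.k8s.sigs.minikube.* events shown above), so the RSRC_DOCKER_STORAGE failure is machine-readable; the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE values in the events are what cap storage for this test. A sketch of extracting the error, assuming jq is available:

	out/minikube-linux-arm64 start -p insufficient-storage-070165 --memory=3072 --output=json \
	    --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: RSRC_DOCKER_STORAGE: Docker is out of disk space! (/var is at 100% of capacity). ...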

TestRunningBinaryUpgrade (64.70s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1820232645 start -p running-upgrade-318002 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1820232645 start -p running-upgrade-318002 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.26790219s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-318002 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-318002 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.285019543s)
helpers_test.go:175: Cleaning up "running-upgrade-318002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-318002
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-318002: (2.260799208s)
--- PASS: TestRunningBinaryUpgrade (64.70s)

TestKubernetesUpgrade (185.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0908 12:11:13.603061  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.734434895s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-295936
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-295936: (1.214566978s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-295936 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-295936 status --format={{.Host}}: exit status 7 (70.173084ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m50.268711112s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-295936 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (136.577065ms)
-- stdout --
	* [kubernetes-upgrade-295936] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-295936
	    minikube start -p kubernetes-upgrade-295936 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2959362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-295936 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-295936 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.789342458s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-295936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-295936
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-295936: (2.816310435s)
--- PASS: TestKubernetesUpgrade (185.17s)
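Note: the sequence above is the supported path: in-place upgrades (v1.28.0 to v1.34.0) work, while downgrades exit 106 with K8S_DOWNGRADE_UNSUPPORTED and the recovery options printed in the suggestion block. To confirm what the cluster actually runs after the upgrade, the test shells out to kubectl; a sketch of the same check, where the jq filter is an assumption added for readability:

	# server version after the upgrade (the test runs the kubectl call itself)
	kubectl --context kubernetes-upgrade-295936 version --output=json | jq -r '.serverVersion.gitVersion'
	# expected: v1.34.0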

TestMissingContainerUpgrade (110.96s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.404639498 start -p missing-upgrade-596167 --memory=3072 --driver=docker  --container-runtime=crio
E0908 12:09:22.648993  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.404639498 start -p missing-upgrade-596167 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.266011369s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-596167
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-596167
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-596167 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-596167 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.02903835s)
helpers_test.go:175: Cleaning up "missing-upgrade-596167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-596167
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-596167: (2.010601188s)
--- PASS: TestMissingContainerUpgrade (110.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841606 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-841606 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.653258ms)
-- stdout --
	* [NoKubernetes-841606] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (45.01s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841606 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841606 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.578193369s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-841606 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.01s)

TestNoKubernetes/serial/StartWithStopK8s (114.46s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841606 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841606 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m50.77859356s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-841606 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-841606 status -o json: exit status 2 (453.85489ms)
-- stdout --
	{"Name":"NoKubernetes-841606","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-841606
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-841606: (3.223978495s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (114.46s)

TestNoKubernetes/serial/Start (10.49s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841606 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841606 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.487195201s)
--- PASS: TestNoKubernetes/serial/Start (10.49s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-841606 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-841606 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.574896ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
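
The "Process exited with status 3" above is systemctl's own code: systemctl is-active exits 0 only for an active unit (3 means inactive), and the test treats any non-zero status as confirmation that kubelet is not running. The same guard as a standalone sketch:

  # passes when kubelet is stopped; is-active exits non-zero (3 = inactive)
  if ! sudo systemctl is-active --quiet kubelet; then
    echo "kubelet is not running"
  fi
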
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

TestNoKubernetes/serial/ProfileList (6.54s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (5.900648899s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.54s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-841606
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-841606: (1.208550326s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.89s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841606 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841606 --driver=docker  --container-runtime=crio: (6.894511077s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.89s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-841606 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-841606 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.808031ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

TestStoppedBinaryUpgrade/Upgrade (54.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1900194004 start -p stopped-upgrade-588244 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1900194004 start -p stopped-upgrade-588244 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.710548334s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1900194004 -p stopped-upgrade-588244 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1900194004 -p stopped-upgrade-588244 stop: (1.302920894s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-588244 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-588244 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.331439992s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (54.35s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-588244
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-588244: (1.252665925s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestPause/serial/Start (95.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-557969 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-557969 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m35.671852713s)
--- PASS: TestPause/serial/Start (95.67s)

TestNetworkPlugins/group/false (3.87s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-880731 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-880731 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (199.273579ms)

-- stdout --
	* [false-880731] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0908 12:14:53.694117  468542 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:14:53.694241  468542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:14:53.694251  468542 out.go:374] Setting ErrFile to fd 2...
	I0908 12:14:53.694256  468542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:14:53.694604  468542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-293252/.minikube/bin
	I0908 12:14:53.695120  468542 out.go:368] Setting JSON to false
	I0908 12:14:53.696073  468542 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7046,"bootTime":1757326648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:14:53.696172  468542 start.go:140] virtualization:  
	I0908 12:14:53.700073  468542 out.go:179] * [false-880731] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:14:53.703876  468542 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:14:53.704063  468542 notify.go:220] Checking for updates...
	I0908 12:14:53.710324  468542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:14:53.713195  468542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-293252/kubeconfig
	I0908 12:14:53.716166  468542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-293252/.minikube
	I0908 12:14:53.719201  468542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:14:53.721944  468542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:14:53.725382  468542 config.go:182] Loaded profile config "pause-557969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:14:53.725476  468542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:14:53.754704  468542 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:14:53.754851  468542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:14:53.825344  468542 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:14:53.815501883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:14:53.825449  468542 docker.go:318] overlay module found
	I0908 12:14:53.828550  468542 out.go:179] * Using the docker driver based on user configuration
	I0908 12:14:53.831440  468542 start.go:304] selected driver: docker
	I0908 12:14:53.831461  468542 start.go:918] validating driver "docker" against <nil>
	I0908 12:14:53.831475  468542 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:14:53.835032  468542 out.go:203] 
	W0908 12:14:53.837954  468542 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 12:14:53.840773  468542 out.go:203] 

** /stderr **
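
Exit status 14 here is a usage error rather than a crash: minikube refuses --cni=false when the container runtime is crio, which is exactly what this negative test asserts. For comparison, a start invocation that would be accepted (a sketch only; bridge is one of minikube's built-in --cni options):

  out/minikube-linux-arm64 start -p false-880731 --memory=3072 \
    --cni=bridge --driver=docker --container-runtime=crio
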
net_test.go:88: 
----------------------- debugLogs start: false-880731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-880731

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-880731

>>> host: /etc/nsswitch.conf:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/hosts:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/resolv.conf:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-880731

>>> host: crictl pods:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: crictl containers:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> k8s: describe netcat deployment:
error: context "false-880731" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-880731" does not exist

>>> k8s: netcat logs:
error: context "false-880731" does not exist

>>> k8s: describe coredns deployment:
error: context "false-880731" does not exist

>>> k8s: describe coredns pods:
error: context "false-880731" does not exist

>>> k8s: coredns logs:
error: context "false-880731" does not exist

>>> k8s: describe api server pod(s):
error: context "false-880731" does not exist

>>> k8s: api server logs:
error: context "false-880731" does not exist

>>> host: /etc/cni:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: ip a s:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: ip r s:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: iptables-save:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: iptables table nat:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> k8s: describe kube-proxy daemon set:
error: context "false-880731" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-880731" does not exist

>>> k8s: kube-proxy logs:
error: context "false-880731" does not exist

>>> host: kubelet daemon status:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: kubelet daemon config:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> k8s: kubelet logs:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:14:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-557969
contexts:
- context:
    cluster: pause-557969
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:14:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-557969
  name: pause-557969
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-557969
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/pause-557969/client.crt
    client-key: /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/pause-557969/client.key
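
Note that current-context is empty in the dump above and only pause-557969 entries exist, which is why every kubectl call against the false-880731 context fails in these logs. Outside the harness, a context would have to be selected or named explicitly, e.g.:

  kubectl config use-context pause-557969
  kubectl --context pause-557969 get nodes
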
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-880731

>>> host: docker daemon status:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: docker daemon config:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/docker/daemon.json:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: docker system info:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: cri-docker daemon status:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: cri-docker daemon config:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: cri-dockerd version:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: containerd daemon status:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: containerd daemon config:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/containerd/config.toml:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: containerd config dump:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: crio daemon status:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: crio daemon config:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: /etc/crio:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

>>> host: crio config:
* Profile "false-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880731"

----------------------- debugLogs end: false-880731 [took: 3.509195617s] --------------------------------
helpers_test.go:175: Cleaning up "false-880731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-880731
--- PASS: TestNetworkPlugins/group/false (3.87s)

TestPause/serial/SecondStartNoReconfiguration (42.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-557969 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-557969 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.076363112s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.11s)

TestPause/serial/Pause (1.1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-557969 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-557969 --alsologtostderr -v=5: (1.101178635s)
--- PASS: TestPause/serial/Pause (1.10s)

TestPause/serial/VerifyStatus (0.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-557969 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-557969 --output=json --layout=cluster: exit status 2 (486.879349ms)

-- stdout --
	{"Name":"pause-557969","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-557969","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
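
Status code 418 is how minikube reports "Paused" in the cluster layout above, and the command exits non-zero in that state, hence the expected exit status 2. A sketch for listing the paused components, assuming jq is available:

  out/minikube-linux-arm64 status -p pause-557969 --output=json --layout=cluster \
    | jq '[.Nodes[].Components[] | select(.StatusName == "Paused") | .Name]'
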
--- PASS: TestPause/serial/VerifyStatus (0.49s)

TestPause/serial/Unpause (1.08s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-557969 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-557969 --alsologtostderr -v=5: (1.08304043s)
--- PASS: TestPause/serial/Unpause (1.08s)

TestPause/serial/PauseAgain (1.39s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-557969 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-557969 --alsologtostderr -v=5: (1.38517859s)
--- PASS: TestPause/serial/PauseAgain (1.39s)

TestPause/serial/DeletePaused (3.16s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-557969 --alsologtostderr -v=5
E0908 12:16:13.603181  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-557969 --alsologtostderr -v=5: (3.163743675s)
--- PASS: TestPause/serial/DeletePaused (3.16s)

TestPause/serial/VerifyDeletedResources (0.98s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-557969
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-557969: exit status 1 (21.829642ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-557969: no such volume

** /stderr **
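
The non-zero exit is the signal the test wants here: docker volume inspect fails once the volume is gone, so deletion can be verified with a plain guard:

  # inspect exits 1 with "no such volume" after a successful delete
  if ! docker volume inspect pause-557969 >/dev/null 2>&1; then
    echo "volume pause-557969 was removed"
  fi
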
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.98s)

TestStartStop/group/old-k8s-version/serial/FirstStart (62.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-734609 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0908 12:17:25.715284  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-734609 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.055014129s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-734609 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [44105b36-a6a2-454a-b4b4-e1f98694b867] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [44105b36-a6a2-454a-b4b4-e1f98694b867] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00413312s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-734609 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-734609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-734609 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.127716834s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-734609 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-734609 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-734609 --alsologtostderr -v=3: (11.963551592s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-734609 -n old-k8s-version-734609
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-734609 -n old-k8s-version-734609: exit status 7 (79.19724ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
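Exit status 7 is why the harness logs "may be ok": minikube status composes its exit code from per-component bits, so a fully stopped profile prints Stopped and exits 7 rather than 0. Reproducible by hand (a sketch):

  out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-734609 -n old-k8s-version-734609
  echo "exit: $?"   # 7 while the profile is stopped, 0 when everything runs
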
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-734609 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (55.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-734609 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-734609 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.13969016s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-734609 -n old-k8s-version-734609
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lfz2n" [b3eb0ff5-a422-4c70-940c-f60d77407f69] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003225321s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lfz2n" [b3eb0ff5-a422-4c70-940c-f60d77407f69] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003847989s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-734609 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-734609 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-734609 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-734609 -n old-k8s-version-734609
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-734609 -n old-k8s-version-734609: exit status 2 (306.879739ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-734609 -n old-k8s-version-734609
E0908 12:19:22.649580  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-734609 -n old-k8s-version-734609: exit status 2 (317.350336ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-734609 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-734609 -n old-k8s-version-734609
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-734609 -n old-k8s-version-734609
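
The pause round-trip above is reproducible outside the test harness; note that status itself exits non-zero while components are paused or stopped, which is why the harness tolerates those exits. A sketch:

  out/minikube-linux-arm64 pause -p old-k8s-version-734609
  out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-734609   # Paused
  out/minikube-linux-arm64 unpause -p old-k8s-version-734609
  out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-734609   # should print Running again
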
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

TestStartStop/group/no-preload/serial/FirstStart (81.66s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-160533 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-160533 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m21.662189409s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.66s)

TestStartStop/group/embed-certs/serial/FirstStart (91.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-905671 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-905671 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m31.38259616s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.38s)

TestStartStop/group/no-preload/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-160533 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2b85c368-55fa-4b84-9efa-cac941188635] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2b85c368-55fa-4b84-9efa-cac941188635] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005936161s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-160533 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-160533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-160533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035258811s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-160533 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-160533 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-160533 --alsologtostderr -v=3: (11.967437948s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-160533 -n no-preload-160533
E0908 12:21:13.602553  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-160533 -n no-preload-160533: exit status 7 (71.960121ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-160533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (58.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-160533 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-160533 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (57.744695113s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-160533 -n no-preload-160533
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.23s)

TestStartStop/group/embed-certs/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-905671 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [637af52a-e592-4509-8f20-5f776a7f1395] Pending
helpers_test.go:352: "busybox" [637af52a-e592-4509-8f20-5f776a7f1395] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [637af52a-e592-4509-8f20-5f776a7f1395] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003756661s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-905671 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.52s)
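Note: DeployApp creates a busybox pod from testdata/busybox.yaml, waits up to 8m for it to run, then reads the container's open-file limit; the ulimit probe is a quick sanity check that the runtime passes sane resource limits through to workloads. The equivalent hand-run commands, with the context name taken from this log:

    kubectl --context embed-certs-905671 create -f testdata/busybox.yaml
    kubectl --context embed-certs-905671 exec busybox -- /bin/sh -c "ulimit -n"   # file-descriptor limit inside the container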

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-905671 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-905671 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.153682674s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-905671 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (12.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-905671 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-905671 --alsologtostderr -v=3: (12.254732232s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-905671 -n embed-certs-905671
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-905671 -n embed-certs-905671: exit status 7 (79.86524ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-905671 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (49.41s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-905671 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-905671 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.997044443s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-905671 -n embed-certs-905671
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bdw4b" [5b354102-3a89-48f2-8141-9c1d79075863] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003915805s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bdw4b" [5b354102-3a89-48f2-8141-9c1d79075863] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00419915s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-160533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-160533 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.2s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-160533 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-160533 -n no-preload-160533
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-160533 -n no-preload-160533: exit status 2 (332.828409ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-160533 -n no-preload-160533
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-160533 -n no-preload-160533: exit status 2 (334.186594ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-160533 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-160533 -n no-preload-160533
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-160533 -n no-preload-160533
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)
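Note: in the Pause subtest, exit status 2 from status is the expected outcome rather than a failure: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, and the degraded state is signalled through the exit code. A sketch of the round trip, using only commands shown in the log:

    out/minikube-linux-arm64 pause -p no-preload-160533
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-160533 -n no-preload-160533   # "Paused", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-160533
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-160533 -n no-preload-160533   # no non-zero exit is logged after unpause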

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-834011 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-834011 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m27.386076673s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.39s)
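Note: the default-k8s-diff-port group repeats the standard start/stop flow with the API server moved from minikube's default 8443 to 8444 via --apiserver-port=8444, checking that nothing assumes the stock port. A hypothetical way to confirm the endpoint after start:

    # The context minikube writes should point at the non-default port 8444.
    kubectl --context default-k8s-diff-port-834011 cluster-info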

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m5f4n" [c3055078-b34c-4cf1-aeb8-cc441a70d8f3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005481209s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m5f4n" [c3055078-b34c-4cf1-aeb8-cc441a70d8f3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005903235s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-905671 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-905671 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-905671 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-905671 --alsologtostderr -v=1: (1.09246446s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-905671 -n embed-certs-905671
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-905671 -n embed-certs-905671: exit status 2 (385.468579ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-905671 -n embed-certs-905671
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-905671 -n embed-certs-905671: exit status 2 (411.26936ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-905671 --alsologtostderr -v=1
E0908 12:22:50.780383  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:50.786711  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:50.798047  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:50.819414  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:50.860729  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:50.942250  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:22:51.104092  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-905671 -n embed-certs-905671
E0908 12:22:51.425676  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-905671 -n embed-certs-905671
E0908 12:22:52.067713  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.98s)

TestStartStop/group/newest-cni/serial/FirstStart (44.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-383396 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 12:23:01.034317  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:11.275896  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:31.757885  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-383396 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (44.847822426s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.85s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-383396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)
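Note: the WARNING above is printed because this profile was started with --network-plugin=cni but no CNI manifest has been applied yet, so pods cannot schedule; this is also why the DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop subtests in this group pass in 0.00s as no-ops. A hypothetical way to make such a profile schedulable, reusing the flannel manifest the custom-flannel run below is started with:

    # testdata/kube-flannel.yaml is the manifest passed to --cni in
    # TestNetworkPlugins/group/custom-flannel; applying something like it
    # is the "additional setup" the warning refers to.
    kubectl --context newest-cni-383396 apply -f testdata/kube-flannel.yaml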

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-383396 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-383396 --alsologtostderr -v=3: (1.255766313s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-383396 -n newest-cni-383396
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-383396 -n newest-cni-383396: exit status 7 (75.261235ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-383396 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (18.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-383396 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-383396 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (17.730246702s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-383396 -n newest-cni-383396
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-834011 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fe98f7fc-9f51-4535-83c6-c459e22053a9] Pending
helpers_test.go:352: "busybox" [fe98f7fc-9f51-4535-83c6-c459e22053a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fe98f7fc-9f51-4535-83c6-c459e22053a9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.006557094s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-834011 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-383396 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-383396 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-383396 -n newest-cni-383396
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-383396 -n newest-cni-383396: exit status 2 (346.147713ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-383396 -n newest-cni-383396
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-383396 -n newest-cni-383396: exit status 2 (337.262558ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-383396 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-383396 -n newest-cni-383396
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-383396 -n newest-cni-383396
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)

TestNetworkPlugins/group/auto/Start (83.67s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m23.668720617s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-834011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-834011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.354027612s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-834011 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.51s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-834011 --alsologtostderr -v=3
E0908 12:24:12.719439  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-834011 --alsologtostderr -v=3: (12.03547145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011: exit status 7 (107.52671ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-834011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-834011 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 12:24:22.649707  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/functional-594147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-834011 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m1.415413874s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.79s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sxnnm" [80168735-9240-4113-b3ae-be1294f21b49] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005411426s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sxnnm" [80168735-9240-4113-b3ae-be1294f21b49] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004230885s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-834011 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-880731 "pgrep -a kubelet"
I0908 12:25:31.412825  295113 config.go:182] Loaded profile config "auto-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mx9lm" [2fbb8d53-441c-4ec4-b259-bfe08c0751d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 12:25:34.641195  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mx9lm" [2fbb8d53-441c-4ec4-b259-bfe08c0751d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003475962s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)
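Note: NetCatPod installs the netcat Deployment from testdata/netcat-deployment.yaml and waits for app=netcat to become healthy; the DNS, Localhost, and HairPin subtests that follow all run their probes from inside this pod. The harness polls the pod itself, but an equivalent hand-run wait would be:

    kubectl --context auto-880731 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-880731 wait --for=condition=ready pod -l app=netcat --timeout=15m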

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-834011 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-834011 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011: exit status 2 (335.58157ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011: exit status 2 (337.244414ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-834011 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-834011 -n default-k8s-diff-port-834011
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.13s)

TestNetworkPlugins/group/kindnet/Start (87.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.85841655s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.86s)

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
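Note: Localhost and HairPin exercise two different return paths from the same pod. Localhost dials localhost:8080 inside the pod's own network namespace, while HairPin dials the pod's own Service name ("netcat", per the nc target in the log), so traffic must leave the pod and loop back through the cluster network; a CNI without hairpin support would pass the first probe and fail the second. Both use the netcat invocation shown above:

    # -z: probe only (send no data), -w 5: 5s timeout, -i 5: delay between probes
    kubectl --context auto-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"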

TestNetworkPlugins/group/calico/Start (66.64s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0908 12:26:10.669994  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/no-preload-160533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:26:13.607180  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/addons-953262/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:26:31.152889  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/no-preload-160533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.64136607s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.64s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-r5d6b" [112ab696-9419-4f6c-8be1-27ce5b7f3d2d] Running
E0908 12:27:12.115406  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/no-preload-160533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003507945s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
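Note: ControllerPod only confirms that the CNI's own agent is running before any connectivity is probed: the app=kindnet DaemonSet pod in kube-system here, and k8s-app=calico-node for the calico group below. An equivalent manual check:

    kubectl --context kindnet-880731 get pods -n kube-system -l app=kindnet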

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-bpg2s" [fb9dbb09-6fc4-4db8-a6af-bbf886437acc] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-bpg2s" [fb9dbb09-6fc4-4db8-a6af-bbf886437acc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003773781s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-880731 "pgrep -a kubelet"
I0908 12:27:16.252975  295113 config.go:182] Loaded profile config "kindnet-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wn2kx" [8d1982ae-528f-4f34-a9f5-1917fad5ef0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wn2kx" [8d1982ae-528f-4f34-a9f5-1917fad5ef0d] Running
I0908 12:27:21.699178  295113 config.go:182] Loaded profile config "calico-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003046835s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-880731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l2nl6" [6584861e-0628-44dc-b20e-5dfaa7d61bdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l2nl6" [6584861e-0628-44dc-b20e-5dfaa7d61bdb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004258109s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (68.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.653629229s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.65s)

TestNetworkPlugins/group/enable-default-cni/Start (87.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0908 12:28:18.483130  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/old-k8s-version-734609/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:34.037283  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/no-preload-160533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.534391  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.540992  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.552459  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.573931  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.615339  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.696768  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:57.858372  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:58.180152  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:58.821965  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:29:00.103409  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m27.828096278s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.83s)
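
Unlike the custom-flannel run, this variant passes --enable-default-cni=true, which asks minikube for its built-in default bridge CNI configuration rather than deploying a separate plugin. A hedged one-liner under the same assumptions (hypothetical profile name):

  out/minikube-linux-arm64 start -p default-cni-demo --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=crio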

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-880731 "pgrep -a kubelet"
I0908 12:29:02.243770  295113 config.go:182] Loaded profile config "custom-flannel-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
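
KubeletFlags only has to confirm that the kubelet inside the node was started with the expected arguments, so it greps the process table over SSH. The same check by hand, assuming the profile from this run still exists:

  # print the kubelet process with its full command line
  out/minikube-linux-arm64 ssh -p custom-flannel-880731 "pgrep -a kubelet"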

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rb9s5" [7791b0da-c368-4bed-8681-c39018b554cc] Pending
E0908 12:29:02.665325  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rb9s5" [7791b0da-c368-4bed-8681-c39018b554cc] Running
E0908 12:29:07.787746  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002852081s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)
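
NetCatPod pushes the repo's netcat deployment and polls until a pod labelled app=netcat is Running. An equivalent manual sequence, assuming the testdata manifest and a kubectl new enough for the wait subcommand:

  kubectl --context custom-flannel-880731 replace --force -f testdata/netcat-deployment.yaml
  # block until the pod passes its readiness checks (the test allows up to 15m)
  kubectl --context custom-flannel-880731 wait --for=condition=ready pod -l app=netcat --timeout=15m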

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
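
The DNS subtest resolves kubernetes.default from inside the netcat pod, which exercises the pod's resolv.conf and the cluster DNS service in one step. The same probe, plus a sanity check of the DNS service it should reach (kube-dns is the conventional service name in kube-system):

  kubectl --context custom-flannel-880731 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context custom-flannel-880731 get svc -n kube-system kube-dns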

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
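
In the Localhost probe, nc runs with -w 5 (five-second timeout), -i 5 (five-second interval between connections) and -z (connect-only scan, no payload), so it succeeds exactly when something is listening on port 8080 inside the pod's own network namespace:

  kubectl --context custom-flannel-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"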

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
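
HairPin is the stricter variant: the pod dials its own Service name (netcat), so traffic leaves the pod, hits the service VIP, and is NATed straight back to the originating pod. It only passes when the CNI/kube-proxy combination handles hairpin traffic:

  # from the pod, dial the service that fronts the pod itself
  kubectl --context custom-flannel-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"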

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-880731 "pgrep -a kubelet"
I0908 12:29:29.496061  295113 config.go:182] Loaded profile config "enable-default-cni-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4txnw" [dc269d42-6549-4545-951e-a20e6df547ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4txnw" [dc269d42-6549-4545-951e-a20e6df547ae] Running
E0908 12:29:38.512146  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005873377s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.38s)

TestNetworkPlugins/group/flannel/Start (70.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.025077414s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.03s)
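
Here flannel is selected by name (--cni=flannel), so minikube deploys the flannel manifest it bundles rather than one supplied by the caller. Sketch, with a hypothetical profile name:

  out/minikube-linux-arm64 start -p flannel-demo --memory=3072 --cni=flannel --driver=docker --container-runtime=crio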

TestNetworkPlugins/group/enable-default-cni/DNS (0.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.49s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.32s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

TestNetworkPlugins/group/bridge/Start (75.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0908 12:30:19.473808  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/default-k8s-diff-port-834011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.665909  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.672297  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.683617  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.705573  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.746984  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.828359  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:31.990331  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:32.311608  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:32.953625  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:34.235202  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:36.797462  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:41.919740  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-880731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m15.931576929s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.93s)
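
The bridge variant (--cni=bridge) produces a plain bridge CNI configuration and, as far as the minikube docs go, is the non-deprecated spelling of the --enable-default-cni run earlier. Sketch, hypothetical profile name again:

  out/minikube-linux-arm64 start -p bridge-demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio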

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-58mj8" [f62cfae3-4f5e-49f0-b874-f6fd3e1f6cdf] Running
E0908 12:30:50.169981  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/no-preload-160533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:30:52.161108  295113 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/auto-880731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004630434s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
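
ControllerPod gates the flannel connectivity subtests on the kube-flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) being up. The same wait by hand:

  kubectl --context flannel-880731 wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=10m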

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-880731 "pgrep -a kubelet"
I0908 12:30:53.183767  295113 config.go:182] Loaded profile config "flannel-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lbvhv" [ddf148b6-fdc4-4e31-967d-004af067e2a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lbvhv" [ddf148b6-fdc4-4e31-967d-004af067e2a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003212557s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-880731 "pgrep -a kubelet"
I0908 12:31:25.931969  295113 config.go:182] Loaded profile config "bridge-880731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-880731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4w5fm" [95746f27-91f9-42a1-8b79-0e275a917406] Pending
helpers_test.go:352: "netcat-cd4db9dbf-4w5fm" [95746f27-91f9-42a1-8b79-0e275a917406] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003960034s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-880731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-880731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (32/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-212564 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-212564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-212564
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-953262 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.36s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-797738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-797738
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.85s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-880731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-880731

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-880731

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/hosts:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/resolv.conf:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-880731

>>> host: crictl pods:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: crictl containers:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> k8s: describe netcat deployment:
error: context "kubenet-880731" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-880731" does not exist

>>> k8s: netcat logs:
error: context "kubenet-880731" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-880731" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-880731" does not exist

>>> k8s: coredns logs:
error: context "kubenet-880731" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-880731" does not exist

>>> k8s: api server logs:
error: context "kubenet-880731" does not exist

>>> host: /etc/cni:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: ip a s:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: ip r s:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: iptables-save:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: iptables table nat:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-880731" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-880731" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-880731" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: kubelet daemon config:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> k8s: kubelet logs:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:14:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-557969
contexts:
- context:
    cluster: pause-557969
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:14:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-557969
  name: pause-557969
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-557969
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/pause-557969/client.crt
    client-key: /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/pause-557969/client.key
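
Note that current-context is empty and pause-557969 is the only entry, which is consistent with every kubectl call above failing with "context was not found" for kubenet-880731. Two quick checks with standard kubectl config subcommands:

  kubectl config current-context   # errors while current-context is unset
  kubectl config get-contexts      # shows pause-557969 as the only available context
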
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-880731

>>> host: docker daemon status:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: docker daemon config:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: docker system info:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: cri-docker daemon status:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: cri-docker daemon config:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: cri-dockerd version:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: containerd daemon status:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: containerd daemon config:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: containerd config dump:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: crio daemon status:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: crio daemon config:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: /etc/crio:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

>>> host: crio config:
* Profile "kubenet-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880731"

----------------------- debugLogs end: kubenet-880731 [took: 3.695502701s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-880731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-880731
--- SKIP: TestNetworkPlugins/group/kubenet (3.85s)

TestNetworkPlugins/group/cilium (4.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-880731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-880731

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-880731

>>> host: /etc/nsswitch.conf:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/hosts:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/resolv.conf:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-880731

>>> host: crictl pods:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: crictl containers:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> k8s: describe netcat deployment:
error: context "cilium-880731" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-880731" does not exist

>>> k8s: netcat logs:
error: context "cilium-880731" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-880731" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-880731" does not exist

>>> k8s: coredns logs:
error: context "cilium-880731" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-880731" does not exist

>>> k8s: api server logs:
error: context "cilium-880731" does not exist

>>> host: /etc/cni:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: ip a s:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: ip r s:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: iptables-save:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: iptables table nat:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-880731

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-880731

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-880731" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-880731" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-880731

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-880731

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-880731" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-880731" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-880731" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-880731" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-880731" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: kubelet daemon config:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> k8s: kubelet logs:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-293252/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:14:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-557969
contexts:
- context:
    cluster: pause-557969
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:14:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-557969
  name: pause-557969
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-557969
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/pause-557969/client.crt
    client-key: /home/jenkins/minikube-integration/21512-293252/.minikube/profiles/pause-557969/client.key
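The kubectl config dump above is the one probe that returns data, and it pinpoints the failure mode: the kubeconfig holds only a pause-557969 cluster/context/user triple and current-context is empty, so every kubectl call scoped to the cilium-880731 context fails with "context was not found". A short sketch of verifying this from a shell with standard kubectl config subcommands (only the profile names from this run are assumed):

  kubectl config get-contexts              # lists pause-557969 only; no cilium-880731
  kubectl config current-context           # errors: current-context is not set
  kubectl config use-context pause-557969  # select the only context that exists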
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-880731

>>> host: docker daemon status:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: docker daemon config:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: docker system info:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: cri-docker daemon status:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: cri-docker daemon config:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: cri-dockerd version:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: containerd daemon status:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: containerd daemon config:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: containerd config dump:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: crio daemon status:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: crio daemon config:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: /etc/crio:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

>>> host: crio config:
* Profile "cilium-880731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880731"

----------------------- debugLogs end: cilium-880731 [took: 4.603239867s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-880731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-880731
--- SKIP: TestNetworkPlugins/group/cilium (4.76s)
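Both SKIP results above come from net_test.go:102, which bails out before any cluster is created; the debugLogs blocks are therefore expected to be all errors and carry no signal about cilium or kubenet themselves. To actually exercise the group, the skip would have to be removed and the test re-run against the integration harness; a hedged sketch, assuming the standard go test entry point under test/integration (exact driver and binary selection flags vary by setup and are not shown in this report):

  go test -v -timeout 30m ./test/integration -run 'TestNetworkPlugins/group/cilium'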