Test Report: Docker_Linux_crio 21503

0729d8e142017243e3350a16dd07e8c0c152f883:2025-09-08:41331

Failed tests (6/332)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    151.58
98     TestFunctional/parallel/ServiceCmdConnect      603.25
144    TestFunctional/parallel/ServiceCmd/DeployApp   600.59
153    TestFunctional/parallel/ServiceCmd/HTTPS       0.53
154    TestFunctional/parallel/ServiceCmd/Format      0.52
155    TestFunctional/parallel/ServiceCmd/URL         0.53
TestAddons/parallel/Ingress (151.58s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-310880 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-310880 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-310880 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [49cf1873-2269-449b-b4e9-463ca76f2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [49cf1873-2269-449b-b4e9-463ca76f2fa6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003918035s
I0908 10:37:29.095586  264164 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-310880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.022527906s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
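
Note: the "ssh: Process exited with status 28" above is curl's own exit code 28 ("operation timed out") propagated back through minikube ssh, i.e. the request to the ingress controller never completed. A minimal manual recheck of the same probe, assuming the addons-310880 profile is still running (--max-time 30 is an added safeguard here, not part of the original test command):

	# re-run the failed ingress probe; curl exits 28 on timeout
	out/minikube-linux-amd64 -p addons-310880 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
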
addons_test.go:288: (dbg) Run:  kubectl --context addons-310880 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
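
The two commands above can also be replayed by hand; a sketch, assuming the same profile (192.168.49.2 is the address `minikube ip` printed in this run):

	# resolve the ingress-dns test hostname against the cluster node IP
	MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-310880 ip)
	nslookup hello-john.test "$MINIKUBE_IP"

No "Non-zero exit" marker follows either Run line, which in this log format means both succeeded: name resolution via the ingress-dns addon worked even though the earlier HTTP probe timed out.
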
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-310880
helpers_test.go:243: (dbg) docker inspect addons-310880:

-- stdout --
	[
	    {
	        "Id": "8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c",
	        "Created": "2025-09-08T10:34:33.439759713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T10:34:33.479805031Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c/hosts",
	        "LogPath": "/var/lib/docker/containers/8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c/8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c-json.log",
	        "Name": "/addons-310880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-310880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-310880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c",
	                "LowerDir": "/var/lib/docker/overlay2/f0c4c3239a578104579ed064679d5a755223751d01ed83f4500fc36450b04c33-init/diff:/var/lib/docker/overlay2/42ba3aa56f0a82ca44fc0cd64f44c2376737b78d7d73ce4114d5dbec5843e84a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0c4c3239a578104579ed064679d5a755223751d01ed83f4500fc36450b04c33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0c4c3239a578104579ed064679d5a755223751d01ed83f4500fc36450b04c33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0c4c3239a578104579ed064679d5a755223751d01ed83f4500fc36450b04c33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-310880",
	                "Source": "/var/lib/docker/volumes/addons-310880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-310880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-310880",
	                "name.minikube.sigs.k8s.io": "addons-310880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "762c1a989ed093a85a4376baea7ac912dea742861a0c3bb63d5fbddf972fa770",
	            "SandboxKey": "/var/run/docker/netns/762c1a989ed0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-310880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d3:03:c8:b9:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "309f4f0c1a08d225c14e68f30a6343ce3ff8a05e0249dac8fe999d54c63aef71",
	                    "EndpointID": "da29ced9332222f9351b87cf2d0e0e6b1702a6cc16e67c7e2e99a0addc991dcb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-310880",
	                        "8ab1d2a445a2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
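
The dump above is a full `docker inspect`; single fields can be pulled with Go-template format strings instead. These two invocations mirror templates that appear verbatim in the provisioning log further below, shown here against the JSON already printed:

	# mapped host port for the container's SSH endpoint
	docker container inspect addons-310880 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # 32768
	# container IP on the addons-310880 network
	docker container inspect addons-310880 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'         # 192.168.49.2
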
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-310880 -n addons-310880
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 logs -n 25: (1.264010354s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-303365 --alsologtostderr --binary-mirror http://127.0.0.1:38671 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-303365 │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │                     │
	│ delete  │ -p binary-mirror-303365                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-303365 │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ addons  │ disable dashboard -p addons-310880                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │                     │
	│ addons  │ enable dashboard -p addons-310880                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │                     │
	│ start   │ -p addons-310880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ enable headlamp -p addons-310880 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ ssh     │ addons-310880 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │                     │
	│ addons  │ addons-310880 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ ip      │ addons-310880 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-310880                                                                                                                                                                                                                                                                                                                                                                                           │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ addons  │ addons-310880 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:37 UTC │ 08 Sep 25 10:37 UTC │
	│ ssh     │ addons-310880 ssh cat /opt/local-path-provisioner/pvc-83775afe-2b66-4d94-a207-eeb453c0c82a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:38 UTC │ 08 Sep 25 10:38 UTC │
	│ addons  │ addons-310880 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:38 UTC │ 08 Sep 25 10:38 UTC │
	│ addons  │ addons-310880 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:38 UTC │ 08 Sep 25 10:38 UTC │
	│ addons  │ addons-310880 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:38 UTC │ 08 Sep 25 10:38 UTC │
	│ addons  │ addons-310880 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:38 UTC │ 08 Sep 25 10:38 UTC │
	│ addons  │ addons-310880 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:38 UTC │ 08 Sep 25 10:38 UTC │
	│ ip      │ addons-310880 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-310880        │ jenkins │ v1.36.0 │ 08 Sep 25 10:39 UTC │ 08 Sep 25 10:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:34:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:34:11.868537  265442 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:34:11.868829  265442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:34:11.868841  265442 out.go:374] Setting ErrFile to fd 2...
	I0908 10:34:11.868845  265442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:34:11.869076  265442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 10:34:11.869799  265442 out.go:368] Setting JSON to false
	I0908 10:34:11.870688  265442 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4596,"bootTime":1757323056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:34:11.870812  265442 start.go:140] virtualization: kvm guest
	I0908 10:34:11.872913  265442 out.go:179] * [addons-310880] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:34:11.874213  265442 notify.go:220] Checking for updates...
	I0908 10:34:11.874228  265442 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:34:11.875915  265442 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:34:11.877324  265442 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:34:11.878853  265442 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 10:34:11.880334  265442 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:34:11.881625  265442 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:34:11.883248  265442 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:34:11.906072  265442 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 10:34:11.906163  265442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:34:11.954711  265442 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 10:34:11.945725778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:34:11.954820  265442 docker.go:318] overlay module found
	I0908 10:34:11.956730  265442 out.go:179] * Using the docker driver based on user configuration
	I0908 10:34:11.957917  265442 start.go:304] selected driver: docker
	I0908 10:34:11.957931  265442 start.go:918] validating driver "docker" against <nil>
	I0908 10:34:11.957943  265442 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:34:11.958858  265442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:34:12.013062  265442 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 10:34:12.003312896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:34:12.013265  265442 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:34:12.013489  265442 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 10:34:12.015158  265442 out.go:179] * Using Docker driver with root privileges
	I0908 10:34:12.016502  265442 cni.go:84] Creating CNI manager for ""
	I0908 10:34:12.016589  265442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 10:34:12.016604  265442 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 10:34:12.016707  265442 start.go:348] cluster config:
	{Name:addons-310880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-310880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:34:12.018275  265442 out.go:179] * Starting "addons-310880" primary control-plane node in "addons-310880" cluster
	I0908 10:34:12.019498  265442 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 10:34:12.021001  265442 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 10:34:12.022216  265442 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:34:12.022273  265442 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:34:12.022268  265442 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 10:34:12.022288  265442 cache.go:58] Caching tarball of preloaded images
	I0908 10:34:12.022422  265442 preload.go:172] Found /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 10:34:12.022450  265442 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 10:34:12.022821  265442 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/config.json ...
	I0908 10:34:12.022851  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/config.json: {Name:mkca5709698f597798dbfcb0dc6decf2f86e626d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:12.039103  265442 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 10:34:12.039249  265442 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 10:34:12.039267  265442 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 10:34:12.039271  265442 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 10:34:12.039279  265442 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 10:34:12.039287  265442 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 10:34:24.548546  265442 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 10:34:24.548595  265442 cache.go:232] Successfully downloaded all kic artifacts
	I0908 10:34:24.548652  265442 start.go:360] acquireMachinesLock for addons-310880: {Name:mk5c4e4c9933e0a067fdb5798601ee53f3eb8686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 10:34:24.548785  265442 start.go:364] duration metric: took 102.996µs to acquireMachinesLock for "addons-310880"
	I0908 10:34:24.548816  265442 start.go:93] Provisioning new machine with config: &{Name:addons-310880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-310880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 10:34:24.548919  265442 start.go:125] createHost starting for "" (driver="docker")
	I0908 10:34:24.550783  265442 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 10:34:24.551083  265442 start.go:159] libmachine.API.Create for "addons-310880" (driver="docker")
	I0908 10:34:24.551123  265442 client.go:168] LocalClient.Create starting
	I0908 10:34:24.551265  265442 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca.pem
	I0908 10:34:24.713362  265442 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/cert.pem
	I0908 10:34:24.829962  265442 cli_runner.go:164] Run: docker network inspect addons-310880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 10:34:24.846814  265442 cli_runner.go:211] docker network inspect addons-310880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 10:34:24.846914  265442 network_create.go:284] running [docker network inspect addons-310880] to gather additional debugging logs...
	I0908 10:34:24.846934  265442 cli_runner.go:164] Run: docker network inspect addons-310880
	W0908 10:34:24.862761  265442 cli_runner.go:211] docker network inspect addons-310880 returned with exit code 1
	I0908 10:34:24.862791  265442 network_create.go:287] error running [docker network inspect addons-310880]: docker network inspect addons-310880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-310880 not found
	I0908 10:34:24.862818  265442 network_create.go:289] output of [docker network inspect addons-310880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-310880 not found
	
	** /stderr **
	I0908 10:34:24.862922  265442 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 10:34:24.879419  265442 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001be1d90}
	I0908 10:34:24.879457  265442 network_create.go:124] attempt to create docker network addons-310880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 10:34:24.879497  265442 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-310880 addons-310880
	I0908 10:34:24.935610  265442 network_create.go:108] docker network addons-310880 192.168.49.0/24 created
	I0908 10:34:24.935644  265442 kic.go:121] calculated static IP "192.168.49.2" for the "addons-310880" container
	I0908 10:34:24.935724  265442 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 10:34:24.952307  265442 cli_runner.go:164] Run: docker volume create addons-310880 --label name.minikube.sigs.k8s.io=addons-310880 --label created_by.minikube.sigs.k8s.io=true
	I0908 10:34:24.973110  265442 oci.go:103] Successfully created a docker volume addons-310880
	I0908 10:34:24.973218  265442 cli_runner.go:164] Run: docker run --rm --name addons-310880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-310880 --entrypoint /usr/bin/test -v addons-310880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 10:34:28.826260  265442 cli_runner.go:217] Completed: docker run --rm --name addons-310880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-310880 --entrypoint /usr/bin/test -v addons-310880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (3.852998287s)
	I0908 10:34:28.826296  265442 oci.go:107] Successfully prepared a docker volume addons-310880
	I0908 10:34:28.826339  265442 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:34:28.826370  265442 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 10:34:28.826443  265442 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-310880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 10:34:33.370712  265442 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-310880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.544208996s)
	I0908 10:34:33.370747  265442 kic.go:203] duration metric: took 4.544374481s to extract preloaded images to volume ...
	W0908 10:34:33.370883  265442 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 10:34:33.370980  265442 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 10:34:33.421965  265442 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-310880 --name addons-310880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-310880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-310880 --network addons-310880 --ip 192.168.49.2 --volume addons-310880:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 10:34:33.735606  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Running}}
	I0908 10:34:33.754910  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:33.775272  265442 cli_runner.go:164] Run: docker exec addons-310880 stat /var/lib/dpkg/alternatives/iptables
	I0908 10:34:33.828586  265442 oci.go:144] the created container "addons-310880" has a running status.
	I0908 10:34:33.828627  265442 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa...
	I0908 10:34:34.014752  265442 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 10:34:34.038104  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:34.057551  265442 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 10:34:34.057575  265442 kic_runner.go:114] Args: [docker exec --privileged addons-310880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 10:34:34.105844  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:34.131014  265442 machine.go:93] provisionDockerMachine start ...
	I0908 10:34:34.131160  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:34.149052  265442 main.go:141] libmachine: Using SSH client type: native
	I0908 10:34:34.149359  265442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 10:34:34.149377  265442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 10:34:34.150240  265442 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39906->127.0.0.1:32768: read: connection reset by peer
	I0908 10:34:37.271361  265442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-310880
	
	I0908 10:34:37.271418  265442 ubuntu.go:182] provisioning hostname "addons-310880"
	I0908 10:34:37.271493  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:37.289910  265442 main.go:141] libmachine: Using SSH client type: native
	I0908 10:34:37.290194  265442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 10:34:37.290213  265442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-310880 && echo "addons-310880" | sudo tee /etc/hostname
	I0908 10:34:37.419968  265442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-310880
	
	I0908 10:34:37.420046  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:37.438041  265442 main.go:141] libmachine: Using SSH client type: native
	I0908 10:34:37.438323  265442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 10:34:37.438351  265442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-310880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-310880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-310880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 10:34:37.560321  265442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
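The script above keeps /etc/hosts consistent with the container's hostname; on a stock Ubuntu image it rewrites the 127.0.1.1 entry in place rather than appending a duplicate. A quick check of the result (a sketch, not part of the run):

    grep addons-310880 /etc/hosts
    # expected: 127.0.1.1 addons-310880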
	I0908 10:34:37.560358  265442 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21503-260352/.minikube CaCertPath:/home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21503-260352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21503-260352/.minikube}
	I0908 10:34:37.560386  265442 ubuntu.go:190] setting up certificates
	I0908 10:34:37.560402  265442 provision.go:84] configureAuth start
	I0908 10:34:37.560492  265442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-310880
	I0908 10:34:37.581317  265442 provision.go:143] copyHostCerts
	I0908 10:34:37.581406  265442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21503-260352/.minikube/ca.pem (1082 bytes)
	I0908 10:34:37.581521  265442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21503-260352/.minikube/cert.pem (1123 bytes)
	I0908 10:34:37.581581  265442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21503-260352/.minikube/key.pem (1675 bytes)
	I0908 10:34:37.581629  265442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21503-260352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca-key.pem org=jenkins.addons-310880 san=[127.0.0.1 192.168.49.2 addons-310880 localhost minikube]
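The SANs listed above end up in the generated server certificate; after the copy steps below they can be confirmed with openssl (a sketch, not part of the test run):

    # show the Subject Alternative Names baked into the server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21503-260352/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'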
	I0908 10:34:37.715830  265442 provision.go:177] copyRemoteCerts
	I0908 10:34:37.715906  265442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 10:34:37.715946  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:37.734822  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:37.829241  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 10:34:37.854182  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 10:34:37.879010  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 10:34:37.903776  265442 provision.go:87] duration metric: took 343.348176ms to configureAuth
	I0908 10:34:37.903808  265442 ubuntu.go:206] setting minikube options for container-runtime
	I0908 10:34:37.904025  265442 config.go:182] Loaded profile config "addons-310880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:34:37.904159  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:37.921333  265442 main.go:141] libmachine: Using SSH client type: native
	I0908 10:34:37.921577  265442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 10:34:37.921600  265442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 10:34:38.144138  265442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 10:34:38.144171  265442 machine.go:96] duration metric: took 4.013129551s to provisionDockerMachine
	I0908 10:34:38.144184  265442 client.go:171] duration metric: took 13.593045319s to LocalClient.Create
	I0908 10:34:38.144210  265442 start.go:167] duration metric: took 13.593130865s to libmachine.API.Create "addons-310880"
	I0908 10:34:38.144224  265442 start.go:293] postStartSetup for "addons-310880" (driver="docker")
	I0908 10:34:38.144240  265442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 10:34:38.144351  265442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 10:34:38.144418  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:38.163292  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:38.253812  265442 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 10:34:38.257494  265442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 10:34:38.257524  265442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 10:34:38.257531  265442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 10:34:38.257538  265442 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 10:34:38.257550  265442 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-260352/.minikube/addons for local assets ...
	I0908 10:34:38.257628  265442 filesync.go:126] Scanning /home/jenkins/minikube-integration/21503-260352/.minikube/files for local assets ...
	I0908 10:34:38.257656  265442 start.go:296] duration metric: took 113.423523ms for postStartSetup
	I0908 10:34:38.257995  265442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-310880
	I0908 10:34:38.276758  265442 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/config.json ...
	I0908 10:34:38.277120  265442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 10:34:38.277173  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:38.295939  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:38.384879  265442 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 10:34:38.389479  265442 start.go:128] duration metric: took 13.840539955s to createHost
	I0908 10:34:38.389513  265442 start.go:83] releasing machines lock for "addons-310880", held for 13.840713342s
	I0908 10:34:38.389591  265442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-310880
	I0908 10:34:38.406894  265442 ssh_runner.go:195] Run: cat /version.json
	I0908 10:34:38.406956  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:38.406988  265442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 10:34:38.407063  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:38.426422  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:38.427533  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:38.581934  265442 ssh_runner.go:195] Run: systemctl --version
	I0908 10:34:38.586215  265442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 10:34:38.726299  265442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 10:34:38.730863  265442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 10:34:38.750625  265442 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 10:34:38.750722  265442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 10:34:38.779855  265442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 10:34:38.779888  265442 start.go:495] detecting cgroup driver to use...
	I0908 10:34:38.779930  265442 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 10:34:38.779984  265442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 10:34:38.795072  265442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 10:34:38.806172  265442 docker.go:218] disabling cri-docker service (if available) ...
	I0908 10:34:38.806239  265442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 10:34:38.819280  265442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 10:34:38.833390  265442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 10:34:38.910761  265442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 10:34:38.990669  265442 docker.go:234] disabling docker service ...
	I0908 10:34:38.990755  265442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 10:34:39.010214  265442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 10:34:39.021522  265442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 10:34:39.102550  265442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 10:34:39.191030  265442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 10:34:39.202720  265442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 10:34:39.219559  265442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 10:34:39.219616  265442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:34:39.230161  265442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 10:34:39.230229  265442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:34:39.240955  265442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:34:39.251108  265442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:34:39.261877  265442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 10:34:39.271754  265442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:34:39.281920  265442 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 10:34:39.297866  265442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
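Taken together, the steps above point crictl at the CRI-O socket and rewrite the CRI-O drop-in. The net effect, as a sketch (assuming the stock kicbase config; surrounding keys may differ):

    sudo cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])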
	I0908 10:34:39.307328  265442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 10:34:39.315514  265442 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 10:34:39.315580  265442 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 10:34:39.329482  265442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
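The sysctl failure above is benign: the bridge-netfilter entry only exists once br_netfilter is loaded, so the run falls back to loading the module and then enables IPv4 forwarding. The same sequence as a standalone sketch:

    # load br_netfilter if the bridge sysctl is missing, then enable forwarding
    sudo sysctl net.bridge.bridge-nf-call-iptables \
      || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"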
	I0908 10:34:39.339235  265442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 10:34:39.412738  265442 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 10:34:39.515978  265442 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 10:34:39.516119  265442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 10:34:39.519721  265442 start.go:563] Will wait 60s for crictl version
	I0908 10:34:39.519787  265442 ssh_runner.go:195] Run: which crictl
	I0908 10:34:39.523093  265442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 10:34:39.560123  265442 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 10:34:39.560265  265442 ssh_runner.go:195] Run: crio --version
	I0908 10:34:39.596442  265442 ssh_runner.go:195] Run: crio --version
	I0908 10:34:39.633578  265442 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 10:34:39.634898  265442 cli_runner.go:164] Run: docker network inspect addons-310880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 10:34:39.651273  265442 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 10:34:39.655024  265442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
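This grep-then-append one-liner (used again below for control-plane.minikube.internal) is an idempotent /etc/hosts update: drop any stale entry for the name, append the current mapping, and copy the result back in a single privileged step. As a standalone sketch with the values from this run:

    NAME=host.minikube.internal; IP=192.168.49.1
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"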
	I0908 10:34:39.665710  265442 kubeadm.go:875] updating cluster {Name:addons-310880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-310880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 10:34:39.665821  265442 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:34:39.665866  265442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 10:34:39.733963  265442 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 10:34:39.733988  265442 crio.go:433] Images already preloaded, skipping extraction
	I0908 10:34:39.734040  265442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 10:34:39.768255  265442 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 10:34:39.768277  265442 cache_images.go:85] Images are preloaded, skipping loading
	I0908 10:34:39.768286  265442 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 10:34:39.768399  265442 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-310880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-310880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
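The unit above is written out by the scp steps below as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. A sketch of verifying and activating the merged unit (the run performs the equivalent daemon-reload and start further down):

    sudo systemctl cat kubelet        # shows the unit with the ExecStart override
    sudo systemctl daemon-reload && sudo systemctl start kubelet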
	I0908 10:34:39.768495  265442 ssh_runner.go:195] Run: crio config
	I0908 10:34:39.811028  265442 cni.go:84] Creating CNI manager for ""
	I0908 10:34:39.811062  265442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 10:34:39.811075  265442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 10:34:39.811099  265442 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-310880 NodeName:addons-310880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 10:34:39.811237  265442 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-310880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
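A config like the one above can be exercised before the real init by kubeadm's dry-run mode, which renders what init would do without changing node state (a sketch; the binary path is the one used by this run):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run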
	I0908 10:34:39.811300  265442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 10:34:39.820035  265442 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 10:34:39.820092  265442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 10:34:39.828608  265442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 10:34:39.846322  265442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 10:34:39.863771  265442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 10:34:39.881154  265442 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 10:34:39.884756  265442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 10:34:39.895775  265442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 10:34:39.977136  265442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 10:34:39.989751  265442 certs.go:68] Setting up /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880 for IP: 192.168.49.2
	I0908 10:34:39.989773  265442 certs.go:194] generating shared ca certs ...
	I0908 10:34:39.989790  265442 certs.go:226] acquiring lock for ca certs: {Name:mkfb1871b737687266cbfcb16d6e349dd72a91b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:39.989931  265442 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21503-260352/.minikube/ca.key
	I0908 10:34:40.386097  265442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-260352/.minikube/ca.crt ...
	I0908 10:34:40.386133  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/ca.crt: {Name:mk9efcde02fb178a4393351ecddda24c3eb2e7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:40.386318  265442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-260352/.minikube/ca.key ...
	I0908 10:34:40.386331  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/ca.key: {Name:mkec22f3fe9de26776030fdadaccd7e6c616e8e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:40.386407  265442 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.key
	I0908 10:34:40.498414  265442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.crt ...
	I0908 10:34:40.498445  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.crt: {Name:mk47ca7354e56a94bb09c04855f3ae0f3ce9b734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:40.498605  265442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.key ...
	I0908 10:34:40.498618  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.key: {Name:mk5a343dea1daef2dc9d14c6f39cdd7996c71dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:40.498689  265442 certs.go:256] generating profile certs ...
	I0908 10:34:40.498749  265442 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.key
	I0908 10:34:40.498763  265442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt with IP's: []
	I0908 10:34:40.719136  265442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt ...
	I0908 10:34:40.719172  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: {Name:mk44fe6c255822f5a42894bf3d54606993e29424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:40.719345  265442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.key ...
	I0908 10:34:40.719355  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.key: {Name:mk82fc0f074524253d07db8d512e312e5c079a89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:40.719425  265442 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.key.96efed89
	I0908 10:34:40.719443  265442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.crt.96efed89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 10:34:41.357467  265442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.crt.96efed89 ...
	I0908 10:34:41.357503  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.crt.96efed89: {Name:mk0d5543fe54473958207157496e19880ba912f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:41.357673  265442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.key.96efed89 ...
	I0908 10:34:41.357686  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.key.96efed89: {Name:mke1ac86dc63dfd03bb1dfdec56479c6a2c2758d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:41.357760  265442 certs.go:381] copying /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.crt.96efed89 -> /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.crt
	I0908 10:34:41.357863  265442 certs.go:385] copying /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.key.96efed89 -> /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.key
	I0908 10:34:41.357918  265442 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.key
	I0908 10:34:41.357937  265442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.crt with IP's: []
	I0908 10:34:41.417623  265442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.crt ...
	I0908 10:34:41.417655  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.crt: {Name:mkf66956706394641849f839a3e350d5077d57be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:41.417816  265442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.key ...
	I0908 10:34:41.417830  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.key: {Name:mk1240439074543f9eb84b5e329d4a165e12d7a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:41.417989  265442 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 10:34:41.418027  265442 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/ca.pem (1082 bytes)
	I0908 10:34:41.418051  265442 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/cert.pem (1123 bytes)
	I0908 10:34:41.418075  265442 certs.go:484] found cert: /home/jenkins/minikube-integration/21503-260352/.minikube/certs/key.pem (1675 bytes)
	I0908 10:34:41.418627  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 10:34:41.443736  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 10:34:41.468772  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 10:34:41.494155  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 10:34:41.519789  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 10:34:41.545938  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 10:34:41.571526  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 10:34:41.595853  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 10:34:41.621985  265442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21503-260352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 10:34:41.647104  265442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 10:34:41.665405  265442 ssh_runner.go:195] Run: openssl version
	I0908 10:34:41.670962  265442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 10:34:41.680767  265442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 10:34:41.684537  265442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0908 10:34:41.684606  265442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 10:34:41.691438  265442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
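The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of the CA, which is how OpenSSL locates trust anchors in /etc/ssl/certs. Re-deriving it as a sketch:

    # the hash-named symlink lets OpenSSL find the CA during chain verification
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"   # b5213941, matching the link created above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"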
	I0908 10:34:41.700689  265442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 10:34:41.704059  265442 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 10:34:41.704108  265442 kubeadm.go:392] StartCluster: {Name:addons-310880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-310880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:34:41.704175  265442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 10:34:41.704239  265442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 10:34:41.740200  265442 cri.go:89] found id: ""
	I0908 10:34:41.740275  265442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 10:34:41.748881  265442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 10:34:41.757420  265442 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 10:34:41.757488  265442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 10:34:41.765852  265442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 10:34:41.765874  265442 kubeadm.go:157] found existing configuration files:
	
	I0908 10:34:41.765929  265442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 10:34:41.774391  265442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 10:34:41.774461  265442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 10:34:41.782689  265442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 10:34:41.791563  265442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 10:34:41.791621  265442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 10:34:41.799898  265442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 10:34:41.808430  265442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 10:34:41.808504  265442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 10:34:41.817243  265442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 10:34:41.826230  265442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 10:34:41.826308  265442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 10:34:41.834937  265442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 10:34:41.876036  265442 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 10:34:41.876130  265442 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 10:34:41.890617  265442 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 10:34:41.890711  265442 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 10:34:41.890749  265442 kubeadm.go:310] OS: Linux
	I0908 10:34:41.890834  265442 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 10:34:41.890908  265442 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 10:34:41.890982  265442 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 10:34:41.891065  265442 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 10:34:41.891147  265442 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 10:34:41.891237  265442 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 10:34:41.891349  265442 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 10:34:41.891395  265442 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 10:34:41.891470  265442 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 10:34:41.940954  265442 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 10:34:41.941108  265442 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 10:34:41.941268  265442 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 10:34:41.948303  265442 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 10:34:41.950185  265442 out.go:252]   - Generating certificates and keys ...
	I0908 10:34:41.950270  265442 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 10:34:41.950384  265442 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 10:34:42.294188  265442 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 10:34:42.407191  265442 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 10:34:42.766122  265442 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 10:34:42.938505  265442 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 10:34:43.452773  265442 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 10:34:43.452935  265442 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-310880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 10:34:43.640757  265442 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 10:34:43.640920  265442 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-310880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 10:34:44.048554  265442 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 10:34:44.617919  265442 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 10:34:44.773794  265442 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 10:34:44.773881  265442 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 10:34:45.198456  265442 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 10:34:45.305017  265442 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 10:34:45.484714  265442 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 10:34:45.594537  265442 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 10:34:45.961643  265442 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 10:34:45.961950  265442 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 10:34:45.964308  265442 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 10:34:45.966392  265442 out.go:252]   - Booting up control plane ...
	I0908 10:34:45.966495  265442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 10:34:45.966609  265442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 10:34:45.966713  265442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 10:34:45.978183  265442 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 10:34:45.978312  265442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 10:34:45.984224  265442 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 10:34:45.984459  265442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 10:34:45.984531  265442 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 10:34:46.065426  265442 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 10:34:46.065590  265442 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 10:34:46.567177  265442 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.877945ms
	I0908 10:34:46.569955  265442 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 10:34:46.570085  265442 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 10:34:46.570216  265442 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 10:34:46.570294  265442 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 10:34:50.297458  265442 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.727381279s
	I0908 10:34:50.736965  265442 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.166915823s
	I0908 10:34:52.071590  265442 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501507747s
	I0908 10:34:52.083412  265442 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 10:34:52.097817  265442 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 10:34:52.112382  265442 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 10:34:52.112645  265442 kubeadm.go:310] [mark-control-plane] Marking the node addons-310880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 10:34:52.122155  265442 kubeadm.go:310] [bootstrap-token] Using token: 4hbgsa.2cetc4y02vrd4tpb
	I0908 10:34:52.123799  265442 out.go:252]   - Configuring RBAC rules ...
	I0908 10:34:52.123961  265442 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 10:34:52.127999  265442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 10:34:52.136650  265442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 10:34:52.141268  265442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 10:34:52.144740  265442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 10:34:52.148024  265442 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 10:34:52.478245  265442 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 10:34:52.899256  265442 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 10:34:53.478799  265442 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 10:34:53.479731  265442 kubeadm.go:310] 
	I0908 10:34:53.479820  265442 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 10:34:53.479833  265442 kubeadm.go:310] 
	I0908 10:34:53.479926  265442 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 10:34:53.479949  265442 kubeadm.go:310] 
	I0908 10:34:53.479985  265442 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 10:34:53.480043  265442 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 10:34:53.480088  265442 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 10:34:53.480113  265442 kubeadm.go:310] 
	I0908 10:34:53.480210  265442 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 10:34:53.480221  265442 kubeadm.go:310] 
	I0908 10:34:53.480285  265442 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 10:34:53.480295  265442 kubeadm.go:310] 
	I0908 10:34:53.480375  265442 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 10:34:53.480476  265442 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 10:34:53.480558  265442 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 10:34:53.480572  265442 kubeadm.go:310] 
	I0908 10:34:53.480650  265442 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 10:34:53.480769  265442 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 10:34:53.480781  265442 kubeadm.go:310] 
	I0908 10:34:53.480902  265442 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4hbgsa.2cetc4y02vrd4tpb \
	I0908 10:34:53.481056  265442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b69105a32c727bab5d92e9251bf1c02f0019a485008030bca05a550d8e27acac \
	I0908 10:34:53.481094  265442 kubeadm.go:310] 	--control-plane 
	I0908 10:34:53.481102  265442 kubeadm.go:310] 
	I0908 10:34:53.481204  265442 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 10:34:53.481213  265442 kubeadm.go:310] 
	I0908 10:34:53.481333  265442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4hbgsa.2cetc4y02vrd4tpb \
	I0908 10:34:53.481475  265442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b69105a32c727bab5d92e9251bf1c02f0019a485008030bca05a550d8e27acac 
	I0908 10:34:53.484170  265442 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 10:34:53.484428  265442 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 10:34:53.484529  265442 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
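The last warning is routine under minikube, which manages the kubelet lifecycle itself; on a hand-managed node it would be addressed exactly as the message suggests:

    sudo systemctl enable kubelet.service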
	I0908 10:34:53.484557  265442 cni.go:84] Creating CNI manager for ""
	I0908 10:34:53.484567  265442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 10:34:53.486326  265442 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 10:34:53.487430  265442 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 10:34:53.491294  265442 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 10:34:53.491312  265442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 10:34:53.509588  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
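Once the manifest is applied, the CNI rollout can be checked with the same bundled kubectl (a sketch, assuming the manifest creates the usual kindnet DaemonSet in kube-system):

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets,pods -o wide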
	I0908 10:34:53.744499  265442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 10:34:53.744578  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:53.744593  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-310880 minikube.k8s.io/updated_at=2025_09_08T10_34_53_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2 minikube.k8s.io/name=addons-310880 minikube.k8s.io/primary=true
	I0908 10:34:53.919452  265442 ops.go:34] apiserver oom_adj: -16
	I0908 10:34:53.919617  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:54.420007  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:54.919809  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:55.420291  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:55.920712  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:56.420350  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:56.920335  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:57.420656  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:57.920311  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:58.420649  265442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 10:34:58.487572  265442 kubeadm.go:1105] duration metric: took 4.743068662s to wait for elevateKubeSystemPrivileges
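
	The ten `kubectl get sa default` runs above, spaced roughly 500ms apart, are a readiness poll: kubeadm returns before the controller manager has created the "default" ServiceAccount, so minikube re-runs the query until it exits 0 (4.74s here, per the elevateKubeSystemPrivileges metric). A minimal sketch of such a loop (assuming plain kubectl invocations; not the actual kubeadm.go code):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // Poll until `kubectl get sa default` succeeds, i.e. until the
	    // service-account controller has populated the new cluster.
	    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		cmd := exec.Command("kubectl", "get", "sa", "default",
	    			"--kubeconfig="+kubeconfig)
	    		if cmd.Run() == nil {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
	    }

	    func main() {
	    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
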
	I0908 10:34:58.487612  265442 kubeadm.go:394] duration metric: took 16.783510224s to StartCluster
	I0908 10:34:58.487635  265442 settings.go:142] acquiring lock: {Name:mk43a7e8c68b203630501ffc08f1fd9afeae1c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:58.487801  265442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:34:58.488544  265442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/kubeconfig: {Name:mk36598e51cd7acd7cf65e2a2b0f2005e49d80bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:58.489360  265442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 10:34:58.489389  265442 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 10:34:58.489448  265442 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 10:34:58.489614  265442 addons.go:69] Setting cloud-spanner=true in profile "addons-310880"
	I0908 10:34:58.489632  265442 addons.go:69] Setting storage-provisioner=true in profile "addons-310880"
	I0908 10:34:58.489631  265442 addons.go:69] Setting inspektor-gadget=true in profile "addons-310880"
	I0908 10:34:58.489647  265442 addons.go:238] Setting addon cloud-spanner=true in "addons-310880"
	I0908 10:34:58.489653  265442 addons.go:238] Setting addon inspektor-gadget=true in "addons-310880"
	I0908 10:34:58.489654  265442 config.go:182] Loaded profile config "addons-310880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:34:58.489666  265442 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-310880"
	I0908 10:34:58.489672  265442 addons.go:69] Setting gcp-auth=true in profile "addons-310880"
	I0908 10:34:58.489689  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489659  265442 addons.go:238] Setting addon storage-provisioner=true in "addons-310880"
	I0908 10:34:58.489709  265442 addons.go:69] Setting volumesnapshots=true in profile "addons-310880"
	I0908 10:34:58.489714  265442 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-310880"
	I0908 10:34:58.489720  265442 mustload.go:65] Loading cluster: addons-310880
	I0908 10:34:58.489734  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489742  265442 addons.go:238] Setting addon volumesnapshots=true in "addons-310880"
	I0908 10:34:58.489756  265442 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-310880"
	I0908 10:34:58.489771  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489777  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489766  265442 addons.go:69] Setting registry-creds=true in profile "addons-310880"
	I0908 10:34:58.489807  265442 addons.go:238] Setting addon registry-creds=true in "addons-310880"
	I0908 10:34:58.489867  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489911  265442 addons.go:69] Setting ingress=true in profile "addons-310880"
	I0908 10:34:58.489977  265442 addons.go:238] Setting addon ingress=true in "addons-310880"
	I0908 10:34:58.489996  265442 config.go:182] Loaded profile config "addons-310880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:34:58.490035  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.490259  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490273  265442 addons.go:69] Setting ingress-dns=true in profile "addons-310880"
	I0908 10:34:58.490279  265442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-310880"
	I0908 10:34:58.490283  265442 addons.go:69] Setting metrics-server=true in profile "addons-310880"
	I0908 10:34:58.490285  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490297  265442 addons.go:238] Setting addon metrics-server=true in "addons-310880"
	I0908 10:34:58.490299  265442 addons.go:69] Setting default-storageclass=true in profile "addons-310880"
	I0908 10:34:58.490320  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.490337  265442 addons.go:69] Setting registry=true in profile "addons-310880"
	I0908 10:34:58.490347  265442 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-310880"
	I0908 10:34:58.490358  265442 addons.go:238] Setting addon registry=true in "addons-310880"
	I0908 10:34:58.490374  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.490381  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.490397  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490692  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490835  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490851  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490855  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.489699  265442 addons.go:69] Setting volcano=true in profile "addons-310880"
	I0908 10:34:58.490275  265442 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-310880"
	I0908 10:34:58.491251  265442 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-310880"
	I0908 10:34:58.491293  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489706  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.490260  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.491257  265442 addons.go:238] Setting addon volcano=true in "addons-310880"
	I0908 10:34:58.491639  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.490320  265442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-310880"
	I0908 10:34:58.490265  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.490259  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.489621  265442 addons.go:69] Setting yakd=true in profile "addons-310880"
	I0908 10:34:58.492033  265442 addons.go:238] Setting addon yakd=true in "addons-310880"
	I0908 10:34:58.492092  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.489684  265442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-310880"
	I0908 10:34:58.492611  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.492614  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.496563  265442 out.go:179] * Verifying Kubernetes components...
	I0908 10:34:58.490288  265442 addons.go:238] Setting addon ingress-dns=true in "addons-310880"
	I0908 10:34:58.497344  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.497918  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.498358  265442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 10:34:58.512507  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.516424  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.518018  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.519555  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
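
	The scrambled ordering of the "Setting addon", "Checking if addons-310880 exists" and container-inspect lines above is expected: each requested addon is brought up concurrently, so their log lines interleave. The shape of that fan-out, as a sketch (hypothetical; the real logic lives in minikube's addons.go):

	    package main

	    import (
	    	"fmt"
	    	"sync"
	    )

	    // Enable every requested addon in its own goroutine and wait for
	    // all of them; interleaved log output is the natural result.
	    func enableAll(addons []string, enable func(string) error) {
	    	var wg sync.WaitGroup
	    	for _, a := range addons {
	    		wg.Add(1)
	    		go func(name string) {
	    			defer wg.Done()
	    			if err := enable(name); err != nil {
	    				fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
	    			}
	    		}(a)
	    	}
	    	wg.Wait()
	    }

	    func main() {
	    	enableAll([]string{"ingress", "registry", "metrics-server"},
	    		func(name string) error { fmt.Println("Setting addon", name); return nil })
	    }
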
	I0908 10:34:58.526950  265442 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 10:34:58.528638  265442 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 10:34:58.528663  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 10:34:58.528723  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.530629  265442 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 10:34:58.531731  265442 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 10:34:58.531753  265442 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 10:34:58.531830  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.531835  265442 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 10:34:58.535821  265442 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 10:34:58.535849  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 10:34:58.535921  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.541344  265442 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 10:34:58.542599  265442 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 10:34:58.543833  265442 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 10:34:58.543857  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 10:34:58.543936  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	W0908 10:34:58.557378  265442 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 10:34:58.558241  265442 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 10:34:58.558306  265442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 10:34:58.558329  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 10:34:58.559805  265442 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 10:34:58.559921  265442 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 10:34:58.560359  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 10:34:58.560462  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.560102  265442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 10:34:58.560780  265442 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 10:34:58.560846  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.561990  265442 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 10:34:58.562010  265442 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 10:34:58.562078  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.562683  265442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 10:34:58.563811  265442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 10:34:58.565030  265442 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 10:34:58.565053  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 10:34:58.565117  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.582008  265442 addons.go:238] Setting addon default-storageclass=true in "addons-310880"
	I0908 10:34:58.582071  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.582804  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.589725  265442 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 10:34:58.591069  265442 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 10:34:58.591093  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 10:34:58.591177  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.608580  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.615093  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.618225  265442 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 10:34:58.619569  265442 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 10:34:58.619754  265442 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 10:34:58.619779  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 10:34:58.619860  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.621844  265442 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 10:34:58.622061  265442 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 10:34:58.622143  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.625190  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.627905  265442 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 10:34:58.632767  265442 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 10:34:58.632793  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 10:34:58.632881  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.644561  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.645103  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.649911  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.650021  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.650610  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.651269  265442 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-310880"
	I0908 10:34:58.651312  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:34:58.651620  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:34:58.655314  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 10:34:58.656712  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 10:34:58.657999  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 10:34:58.659267  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 10:34:58.660366  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 10:34:58.661587  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 10:34:58.662808  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 10:34:58.663908  265442 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 10:34:58.664049  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.665340  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 10:34:58.665336  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.665413  265442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 10:34:58.665508  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.670611  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.673279  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.673501  265442 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 10:34:58.673519  265442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 10:34:58.673580  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.674784  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.686181  265442 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 10:34:58.687357  265442 out.go:179]   - Using image docker.io/busybox:stable
	I0908 10:34:58.688402  265442 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 10:34:58.688423  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 10:34:58.688493  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:34:58.698131  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	W0908 10:34:58.700446  265442 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 10:34:58.700490  265442 retry.go:31] will retry after 246.876666ms: ssh: handshake failed: EOF
	I0908 10:34:58.718609  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:34:58.719024  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	W0908 10:34:58.776428  265442 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 10:34:58.776465  265442 retry.go:31] will retry after 333.220633ms: ssh: handshake failed: EOF
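
	Both ssh dial failures ("handshake failed: EOF") happen while the node's sshd is still settling under the burst of parallel connections; retry.go schedules another attempt after a growing, randomized delay (246ms, then 333ms) instead of failing the whole addon push. A generic sketch of that retry-with-jittered-backoff pattern (simple multiplicative backoff assumed; not minikube's retry.go):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // Retry op with a jittered, growing delay between attempts,
	    // returning the last error if every attempt fails.
	    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	    	var err error
	    	delay := base
	    	for i := 0; i < attempts; i++ {
	    		if err = op(); err == nil {
	    			return nil
	    		}
	    		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
	    		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
	    		time.Sleep(delay + jitter)
	    		delay = delay * 3 / 2
	    	}
	    	return err
	    }

	    func main() {
	    	_ = retryWithBackoff(3, 200*time.Millisecond, func() error {
	    		return errors.New("ssh: handshake failed: EOF")
	    	})
	    }
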
	I0908 10:34:58.780885  265442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
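
	This sed pipeline edits the coredns ConfigMap in flight: it inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive so pods can resolve host.minikube.internal to the gateway, adds `log` after `errors`, and feeds the result to `kubectl replace`. Reconstructed from the sed expressions, the injected Corefile fragment is:

	            hosts {
	               192.168.49.1 host.minikube.internal
	               fallthrough
	            }

	The `fallthrough` keeps all other names flowing on to the remaining plugins, so only the one extra record is added.
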
	I0908 10:34:58.781050  265442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 10:34:58.977956  265442 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 10:34:58.977987  265442 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 10:34:58.998955  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 10:34:59.094332  265442 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 10:34:59.094409  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 10:34:59.177618  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 10:34:59.178920  265442 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 10:34:59.178949  265442 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 10:34:59.278051  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 10:34:59.283343  265442 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 10:34:59.283445  265442 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 10:34:59.284851  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 10:34:59.293831  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 10:34:59.296902  265442 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:34:59.296980  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 10:34:59.381967  265442 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 10:34:59.382059  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 10:34:59.383849  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 10:34:59.390221  265442 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 10:34:59.390325  265442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 10:34:59.394907  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 10:34:59.486129  265442 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 10:34:59.486223  265442 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 10:34:59.487466  265442 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 10:34:59.487543  265442 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 10:34:59.588316  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 10:34:59.680207  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 10:34:59.686407  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 10:34:59.686441  265442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 10:34:59.699695  265442 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 10:34:59.699732  265442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 10:34:59.777251  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:34:59.982794  265442 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 10:34:59.982870  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 10:34:59.986710  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 10:34:59.986787  265442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 10:34:59.990140  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 10:35:00.078762  265442 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 10:35:00.078860  265442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 10:35:00.277359  265442 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 10:35:00.277477  265442 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 10:35:00.289346  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 10:35:00.289439  265442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 10:35:00.380414  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 10:35:00.392717  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 10:35:00.392803  265442 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 10:35:00.595052  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 10:35:01.086405  265442 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.305306596s)
	I0908 10:35:01.087678  265442 node_ready.go:35] waiting up to 6m0s for node "addons-310880" to be "Ready" ...
	I0908 10:35:01.088173  265442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.307241918s)
	I0908 10:35:01.088245  265442 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0908 10:35:01.285201  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.286130793s)
	I0908 10:35:01.378348  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 10:35:01.378445  265442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 10:35:01.400104  265442 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 10:35:01.400183  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 10:35:02.079880  265442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 10:35:02.079969  265442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 10:35:02.094123  265442 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-310880" context rescaled to 1 replicas
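
	kapi.go scales the coredns Deployment down to a single replica, which is all a one-node cluster needs (kubeadm deploys two by default). The equivalent manual step, as a sketch (assuming the same kubeconfig path):

	    package main

	    import (
	    	"os"
	    	"os/exec"
	    )

	    // Rescale coredns to one replica on a single-node cluster.
	    func main() {
	    	cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
	    		"-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
	    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    	_ = cmd.Run()
	    }
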
	I0908 10:35:02.377465  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 10:35:02.483557  265442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 10:35:02.483658  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 10:35:02.677464  265442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 10:35:02.677500  265442 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 10:35:02.892082  265442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 10:35:02.892122  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 10:35:03.092560  265442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 10:35:03.092664  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W0908 10:35:03.180728  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:03.293878  265442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 10:35:03.293915  265442 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 10:35:03.476917  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 10:35:03.786703  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.609034018s)
	I0908 10:35:03.786783  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.508639141s)
	I0908 10:35:03.786832  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.501907365s)
	I0908 10:35:04.803016  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.50902827s)
	I0908 10:35:04.803070  265442 addons.go:479] Verifying addon ingress=true in "addons-310880"
	I0908 10:35:04.803081  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.419114912s)
	I0908 10:35:04.803149  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.408140995s)
	I0908 10:35:04.803504  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.215094747s)
	I0908 10:35:04.803624  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.123306898s)
	I0908 10:35:04.803705  265442 addons.go:479] Verifying addon registry=true in "addons-310880"
	I0908 10:35:04.803762  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.026398296s)
	W0908 10:35:04.804119  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:04.804141  265442 retry.go:31] will retry after 140.238784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
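
	This retry cannot help: the earlier transfer step copied inspektor-gadget/ig-crd.yaml across as only 14 bytes, so the file being validated holds no usable manifest, and "apiVersion not set, kind not set" recurs on every identical re-apply (as the further retries below confirm). Any Kubernetes object needs at least those two fields plus a name; a minimal well-formed header, for comparison (generic placeholder, not the real inspektor-gadget CRD content):

	        apiVersion: apiextensions.k8s.io/v1
	        kind: CustomResourceDefinition
	        metadata:
	          name: examples.example.com   # placeholder name, for illustration only
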
	I0908 10:35:04.803836  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.813629169s)
	I0908 10:35:04.803900  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.423391858s)
	I0908 10:35:04.803984  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.208822494s)
	I0908 10:35:04.804360  265442 addons.go:479] Verifying addon metrics-server=true in "addons-310880"
	I0908 10:35:04.804500  265442 out.go:179] * Verifying ingress addon...
	I0908 10:35:04.805299  265442 out.go:179] * Verifying registry addon...
	I0908 10:35:04.805993  265442 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-310880 service yakd-dashboard -n yakd-dashboard
	
	I0908 10:35:04.806709  265442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 10:35:04.807188  265442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 10:35:04.878281  265442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 10:35:04.878308  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:04.878499  265442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 10:35:04.878512  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:04.885928  265442 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
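
	The default-storageclass failure is an optimistic-concurrency conflict: between minikube's read and write of the local-path StorageClass (created moments earlier by the storage-provisioner-rancher apply), something else updated the object, so the apiserver rejects the stale resourceVersion. The standard client-go remedy is to re-read and re-apply the mutation on conflict; a sketch (assuming a configured clientset; not minikube's code):

	    package addons

	    import (
	    	"context"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/util/retry"
	    )

	    // Mark a StorageClass non-default, re-fetching and retrying
	    // whenever the apiserver reports a resourceVersion conflict.
	    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
	    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	    		if err != nil {
	    			return err
	    		}
	    		if sc.Annotations == nil {
	    			sc.Annotations = map[string]string{}
	    		}
	    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
	    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	    		return err
	    	})
	    }
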
	I0908 10:35:04.945208  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:05.309710  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:05.309875  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:05.591643  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:05.880535  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:05.881496  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:06.299388  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.921867744s)
	W0908 10:35:06.299512  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 10:35:06.299546  265442 retry.go:31] will retry after 197.419775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
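
	Unlike the truncated ig-crd.yaml, this failure is pure ordering: the three snapshot.storage.k8s.io CRDs are created within the same apply, but the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is validated before the apiserver has registered them, hence "ensure CRDs are installed first", and a retry moments later can succeed. Done by hand, the race is avoided by waiting for the CRD to become Established between the two applies; a sketch (assuming kubectl, with the CRD name taken from the stdout above):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    // Apply the CRDs, block until the apiserver marks them
	    // Established, then apply the custom resources that use them.
	    func main() {
	    	steps := [][]string{
	    		{"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
	    		{"wait", "--for=condition=Established", "--timeout=60s",
	    			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
	    		{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
	    	}
	    	for _, args := range steps {
	    		cmd := exec.Command("kubectl", args...)
	    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    		if err := cmd.Run(); err != nil {
	    			fmt.Fprintln(os.Stderr, "step failed:", err)
	    			os.Exit(1)
	    		}
	    	}
	    }
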
	I0908 10:35:06.299640  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.822609009s)
	I0908 10:35:06.299694  265442 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-310880"
	I0908 10:35:06.301219  265442 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 10:35:06.303319  265442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 10:35:06.307601  265442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 10:35:06.307628  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:06.310772  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:06.310892  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:06.497983  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 10:35:06.508072  265442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 10:35:06.508177  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:35:06.527619  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:35:06.695865  265442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 10:35:06.703631  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.75838062s)
	W0908 10:35:06.703694  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:06.703720  265442 retry.go:31] will retry after 433.825871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:06.716312  265442 addons.go:238] Setting addon gcp-auth=true in "addons-310880"
	I0908 10:35:06.716379  265442 host.go:66] Checking if "addons-310880" exists ...
	I0908 10:35:06.716877  265442 cli_runner.go:164] Run: docker container inspect addons-310880 --format={{.State.Status}}
	I0908 10:35:06.735947  265442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 10:35:06.735997  265442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-310880
	I0908 10:35:06.756674  265442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/addons-310880/id_rsa Username:docker}
	I0908 10:35:06.809111  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:06.809807  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:06.810509  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:07.138644  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:07.307138  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:07.309480  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:07.310179  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:07.807434  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:07.809588  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:07.809684  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:08.091632  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:08.307007  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:08.309346  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:08.310193  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:08.806989  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:08.809271  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:08.810254  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:09.109566  265442 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.373579137s)
	I0908 10:35:09.109683  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.970994404s)
	W0908 10:35:09.109765  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:09.109794  265442 retry.go:31] will retry after 295.813424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
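
The apply failures above all share one root cause: the first document in ig-crd.yaml fails kubectl's client-side validation because it does not set the apiVersion and kind fields that every Kubernetes manifest must declare. The remaining resources in the batch still apply (hence the "unchanged"/"configured" lines in stdout), and the suggested --validate=false flag would only suppress the check, not repair the manifest. For reference, a CRD document passes validation only when it opens with both fields; the group and names below are illustrative placeholders, not the actual inspektor-gadget CRD:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.mygroup.example.com   # placeholder, not the real CRD name
	spec:
	  group: mygroup.example.com
	  names:
	    kind: Example
	    plural: examples
	  scope: Namespaced
	  versions:
	  - name: v1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object
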
	I0908 10:35:09.109566  265442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.611508572s)
	I0908 10:35:09.111293  265442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 10:35:09.112814  265442 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 10:35:09.113930  265442 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 10:35:09.113946  265442 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 10:35:09.133134  265442 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 10:35:09.133169  265442 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 10:35:09.152398  265442 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 10:35:09.152423  265442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 10:35:09.171711  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 10:35:09.307299  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:09.309539  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:09.309738  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:09.405840  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:09.593347  265442 addons.go:479] Verifying addon gcp-auth=true in "addons-310880"
	I0908 10:35:09.594892  265442 out.go:179] * Verifying gcp-auth addon...
	I0908 10:35:09.596681  265442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 10:35:09.599532  265442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 10:35:09.599555  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:09.806827  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:09.810260  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:09.810626  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:10.099829  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:35:10.106219  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:10.106264  265442 retry.go:31] will retry after 970.387893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:10.306430  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:10.310034  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:10.310196  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:10.591072  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:10.599379  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:10.807072  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:10.809640  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:10.810113  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:11.077547  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:11.100070  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:11.307269  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:11.309988  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:11.310098  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:11.600513  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:35:11.647962  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:11.648005  265442 retry.go:31] will retry after 1.780032895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
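
The delay between attempts grows with each failure (295.8ms, then 970.4ms, now 1.78s): retry.go re-runs the apply with backoff until an overall deadline. A minimal, self-contained sketch of that pattern, using a simplified signature and plain doubling rather than minikube's actual retry helper:

	// retryWithBackoff retries fn, roughly doubling the wait between
	// attempts, until fn succeeds or the overall deadline is exceeded.
	// Illustrative sketch only; minikube's real helper also adds jitter.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retryWithBackoff(fn func() error, initial, maxElapsed time.Duration) error {
		wait := initial
		start := time.Now()
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start)+wait > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
			}
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2 // back off before the next attempt
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("apply failed")
			}
			return nil
		}, 300*time.Millisecond, 10*time.Second)
		fmt.Println("result:", err)
	}

Backing off keeps the repeated failures from hammering the API server while other addons are still starting; note, though, that no amount of retrying can succeed here, since every attempt fails on the same invalid manifest.
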
	I0908 10:35:11.807479  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:11.809312  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:11.809465  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:12.100644  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:12.307410  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:12.309444  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:12.309619  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:12.591879  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:12.599870  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:12.806978  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:12.809586  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:12.809858  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:13.099577  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:13.307045  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:13.309209  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:13.309945  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:13.428218  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:13.600426  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:13.807199  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:13.809961  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:13.810235  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:13.991755  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:13.991786  265442 retry.go:31] will retry after 968.283462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:14.099686  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:14.307571  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:14.309821  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:14.310004  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:14.600713  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:14.807167  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:14.809825  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:14.810598  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:14.960909  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 10:35:15.091281  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:15.100505  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:15.307226  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:15.309741  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:15.309801  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:15.525196  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:15.525237  265442 retry.go:31] will retry after 2.492744407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:15.599876  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:15.807379  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:15.809663  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:15.809868  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:16.100006  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:16.307303  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:16.309894  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:16.310027  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:16.599751  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:16.806754  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:16.809140  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:16.809682  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:17.100244  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:17.306911  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:17.308994  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:17.309715  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:17.592051  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:17.599740  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:17.807505  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:17.810249  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:17.810341  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:18.018466  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:18.100155  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:18.307332  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:18.309546  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:18.309680  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:18.582577  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:18.582612  265442 retry.go:31] will retry after 2.565217542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:18.600684  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:18.807028  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:18.809742  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:18.810244  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:19.100657  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:19.306620  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:19.310112  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:19.310318  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:19.599857  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:19.806947  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:19.809170  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:19.809934  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:20.091262  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:20.099942  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:20.307236  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:20.309167  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:20.309359  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:20.600517  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:20.806606  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:20.810050  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:20.810121  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:21.100270  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:21.148420  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:21.307100  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:21.309501  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:21.310422  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:21.600578  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:35:21.720463  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:21.720508  265442 retry.go:31] will retry after 8.777177637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:21.806841  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:21.809390  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:21.810005  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:22.091685  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:22.100973  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:22.307426  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:22.309645  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:22.309822  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:22.600728  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:22.806754  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:22.809291  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:22.809773  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:23.100314  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:23.306323  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:23.309849  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:23.309966  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:23.600274  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:23.806417  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:23.809939  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:23.810146  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:24.100593  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:24.306750  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:24.309697  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:24.310055  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:24.591438  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:24.600244  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:24.807112  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:24.809401  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:24.810384  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:25.100239  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:25.307063  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:25.309412  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:25.310028  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:25.599948  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:25.807509  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:25.809574  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:25.809774  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:26.100349  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:26.306247  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:26.309905  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:26.310035  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:26.600516  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:26.807009  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:26.809711  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:26.810385  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:27.090797  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:27.099808  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:27.306560  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:27.309742  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:27.309863  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:27.599838  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:27.807528  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:27.813143  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:27.813200  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:28.100708  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:28.306903  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:28.309566  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:28.310508  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:28.600163  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:28.807137  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:28.809293  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:28.810040  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:29.091333  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:29.100451  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:29.306503  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:29.310237  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:29.310425  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:29.600447  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:29.806555  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:29.809838  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:29.810167  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:30.100889  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:30.306855  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:30.309681  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:30.310118  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:30.498397  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:30.599830  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:30.806689  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:30.810004  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:30.810214  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:31.065215  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:31.065252  265442 retry.go:31] will retry after 8.776182353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:31.100934  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:31.306964  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:31.309553  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:31.310154  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:31.591160  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:31.599897  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:31.808268  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:31.810101  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:31.810251  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:32.100260  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:32.307465  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:32.309656  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:32.309811  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:32.600274  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:32.807588  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:32.809820  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:32.809891  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:33.100306  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:33.306150  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:33.309161  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:33.309315  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:33.591248  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:33.600271  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:33.807201  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:33.809612  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:33.809802  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:34.101206  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:34.307471  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:34.310132  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:34.310381  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:34.599851  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:34.807209  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:34.809587  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:34.809805  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:35.100293  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:35.307417  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:35.309482  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:35.309661  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:35.591440  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:35.600354  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:35.806352  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:35.809981  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:35.810026  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:36.099864  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:36.307516  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:36.309662  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:36.309814  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:36.600494  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:36.806768  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:36.809504  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:36.809642  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:37.100472  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:37.306317  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:37.309756  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:37.309867  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:37.592145  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:37.599671  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:37.806886  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:37.809663  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:37.810290  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:38.100395  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:38.306307  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:38.309866  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:38.309959  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:38.600910  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:38.806941  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:38.810199  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:38.810704  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:39.100038  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:39.306941  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:39.309548  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:39.310149  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:39.599737  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:39.806907  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:39.809655  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:39.810693  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:39.841772  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 10:35:40.091376  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:40.100641  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:40.307369  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:40.309726  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:40.309949  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 10:35:40.408692  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:40.408728  265442 retry.go:31] will retry after 10.749650346s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:40.599926  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:40.806969  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:40.809421  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:40.810283  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:41.100476  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:41.306399  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:41.309491  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:41.309709  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:41.600468  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:41.806757  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:41.809378  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:41.809874  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:35:42.091451  265442 node_ready.go:57] node "addons-310880" has "Ready":"False" status (will retry)
	I0908 10:35:42.100969  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:42.307041  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:42.310083  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:42.310515  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:42.600883  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:42.807081  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:42.809586  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:42.810558  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:43.099942  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:43.309982  265442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 10:35:43.310018  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:43.313097  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:43.313126  265442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 10:35:43.313143  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:43.593260  265442 node_ready.go:49] node "addons-310880" is "Ready"
	I0908 10:35:43.593296  265442 node_ready.go:38] duration metric: took 42.505504201s for node "addons-310880" to be "Ready" ...
	I0908 10:35:43.593316  265442 api_server.go:52] waiting for apiserver process to appear ...
	I0908 10:35:43.593374  265442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 10:35:43.598909  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:43.607101  265442 api_server.go:72] duration metric: took 45.117674093s to wait for apiserver process to appear ...
	I0908 10:35:43.607130  265442 api_server.go:88] waiting for apiserver healthz status ...
	I0908 10:35:43.607164  265442 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 10:35:43.611578  265442 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
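
The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding once it returns 200 with body "ok". A standalone sketch of that probe, assuming the node IP and port from the log and skipping TLS verification for brevity (minikube itself authenticates with the cluster's credentials; an unauthenticated probe may be rejected unless anonymous auth is enabled):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Skip cert verification only for this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the literal body "ok".
        fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }
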
	I0908 10:35:43.612552  265442 api_server.go:141] control plane version: v1.34.0
	I0908 10:35:43.612579  265442 api_server.go:131] duration metric: took 5.441735ms to wait for apiserver health ...
	I0908 10:35:43.612590  265442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 10:35:43.616482  265442 system_pods.go:59] 20 kube-system pods found
	I0908 10:35:43.616516  265442 system_pods.go:61] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending
	I0908 10:35:43.616530  265442 system_pods.go:61] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:35:43.616543  265442 system_pods.go:61] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:43.616559  265442 system_pods.go:61] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:43.616566  265442 system_pods.go:61] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending
	I0908 10:35:43.616575  265442 system_pods.go:61] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:43.616580  265442 system_pods.go:61] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:43.616585  265442 system_pods.go:61] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:43.616590  265442 system_pods.go:61] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:43.616599  265442 system_pods.go:61] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:43.616606  265442 system_pods.go:61] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:43.616610  265442 system_pods.go:61] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:43.616615  265442 system_pods.go:61] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:43.616624  265442 system_pods.go:61] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending
	I0908 10:35:43.616632  265442 system_pods.go:61] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:43.616643  265442 system_pods.go:61] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:43.616655  265442 system_pods.go:61] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending
	I0908 10:35:43.616669  265442 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:43.616678  265442 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending
	I0908 10:35:43.616686  265442 system_pods.go:61] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:35:43.616696  265442 system_pods.go:74] duration metric: took 4.099009ms to wait for pod list to return data ...
	I0908 10:35:43.616709  265442 default_sa.go:34] waiting for default service account to be created ...
	I0908 10:35:43.619203  265442 default_sa.go:45] found service account: "default"
	I0908 10:35:43.619226  265442 default_sa.go:55] duration metric: took 2.511023ms for default service account to be created ...
	I0908 10:35:43.619237  265442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 10:35:43.622757  265442 system_pods.go:86] 20 kube-system pods found
	I0908 10:35:43.622788  265442 system_pods.go:89] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending
	I0908 10:35:43.622800  265442 system_pods.go:89] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:35:43.622809  265442 system_pods.go:89] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:43.622817  265442 system_pods.go:89] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:43.622822  265442 system_pods.go:89] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending
	I0908 10:35:43.622828  265442 system_pods.go:89] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:43.622833  265442 system_pods.go:89] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:43.622839  265442 system_pods.go:89] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:43.622847  265442 system_pods.go:89] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:43.622858  265442 system_pods.go:89] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:43.622865  265442 system_pods.go:89] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:43.622875  265442 system_pods.go:89] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:43.622881  265442 system_pods.go:89] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:43.622890  265442 system_pods.go:89] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending
	I0908 10:35:43.622900  265442 system_pods.go:89] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:43.622910  265442 system_pods.go:89] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:43.622921  265442 system_pods.go:89] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending
	I0908 10:35:43.622933  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:43.622941  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending
	I0908 10:35:43.622952  265442 system_pods.go:89] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:35:43.622990  265442 retry.go:31] will retry after 214.925245ms: missing components: kube-dns
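
The retry.go lines here implement a poll-until-ready loop: list the kube-system pods, and if a required component (kube-dns in this run) is not yet running, sleep for a jittered, growing interval and try again. A schematic sketch of that pattern, with a placeholder condition standing in for the real pod check:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls cond until it reports success or the timeout passes,
    // sleeping a jittered, growing interval between attempts (mirroring
    // the increasing "will retry after ..." intervals in the log).
    func waitFor(cond func() (bool, string), timeout time.Duration) error {
        interval := 200 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, missing := cond()
            if ok {
                return nil
            }
            sleep := interval + time.Duration(rand.Int63n(int64(interval)))
            fmt.Printf("will retry after %v: missing components: %s\n", sleep, missing)
            time.Sleep(sleep)
            interval = interval * 3 / 2 // grow the base interval
        }
        return fmt.Errorf("timed out waiting for condition")
    }

    func main() {
        attempts := 0
        // Placeholder condition: "ready" after three attempts; the real
        // loop inspects the kube-system pod list instead.
        err := waitFor(func() (bool, string) {
            attempts++
            return attempts > 3, "kube-dns"
        }, 10*time.Second)
        if err != nil {
            fmt.Println(err)
        }
    }
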
	I0908 10:35:43.808478  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:43.811762  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:43.909243  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:43.980010  265442 system_pods.go:86] 20 kube-system pods found
	I0908 10:35:43.980055  265442 system_pods.go:89] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:35:43.980067  265442 system_pods.go:89] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:35:43.980077  265442 system_pods.go:89] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:43.980089  265442 system_pods.go:89] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:43.980106  265442 system_pods.go:89] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 10:35:43.980121  265442 system_pods.go:89] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:43.980134  265442 system_pods.go:89] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:43.980140  265442 system_pods.go:89] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:43.980147  265442 system_pods.go:89] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:43.980158  265442 system_pods.go:89] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:43.980164  265442 system_pods.go:89] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:43.980170  265442 system_pods.go:89] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:43.980178  265442 system_pods.go:89] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:43.980191  265442 system_pods.go:89] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:35:43.980208  265442 system_pods.go:89] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:43.980220  265442 system_pods.go:89] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:43.980229  265442 system_pods.go:89] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:35:43.980240  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:43.980254  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:43.980268  265442 system_pods.go:89] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:35:43.980292  265442 retry.go:31] will retry after 291.831196ms: missing components: kube-dns
	I0908 10:35:44.101665  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:44.282505  265442 system_pods.go:86] 20 kube-system pods found
	I0908 10:35:44.282588  265442 system_pods.go:89] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:35:44.282599  265442 system_pods.go:89] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:35:44.282608  265442 system_pods.go:89] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:44.282618  265442 system_pods.go:89] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:44.282627  265442 system_pods.go:89] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 10:35:44.282674  265442 system_pods.go:89] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:44.282686  265442 system_pods.go:89] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:44.282693  265442 system_pods.go:89] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:44.282702  265442 system_pods.go:89] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:44.282712  265442 system_pods.go:89] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:44.282718  265442 system_pods.go:89] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:44.282745  265442 system_pods.go:89] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:44.282760  265442 system_pods.go:89] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:44.282769  265442 system_pods.go:89] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:35:44.282781  265442 system_pods.go:89] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:44.282793  265442 system_pods.go:89] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:44.282801  265442 system_pods.go:89] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:35:44.282809  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:44.282822  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:44.282838  265442 system_pods.go:89] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:35:44.282863  265442 retry.go:31] will retry after 415.581763ms: missing components: kube-dns
	I0908 10:35:44.379969  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:44.380021  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:44.380212  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:44.601093  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:44.704009  265442 system_pods.go:86] 20 kube-system pods found
	I0908 10:35:44.704051  265442 system_pods.go:89] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:35:44.704063  265442 system_pods.go:89] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:35:44.704075  265442 system_pods.go:89] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:44.704084  265442 system_pods.go:89] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:44.704092  265442 system_pods.go:89] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 10:35:44.704146  265442 system_pods.go:89] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:44.704157  265442 system_pods.go:89] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:44.704162  265442 system_pods.go:89] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:44.704168  265442 system_pods.go:89] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:44.704180  265442 system_pods.go:89] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:44.704185  265442 system_pods.go:89] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:44.704190  265442 system_pods.go:89] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:44.704200  265442 system_pods.go:89] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:44.704213  265442 system_pods.go:89] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:35:44.704224  265442 system_pods.go:89] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:44.704236  265442 system_pods.go:89] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:44.704249  265442 system_pods.go:89] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:35:44.704259  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:44.704271  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:44.704281  265442 system_pods.go:89] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:35:44.704306  265442 retry.go:31] will retry after 371.05423ms: missing components: kube-dns
	I0908 10:35:44.807715  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:44.809883  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:44.809959  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:45.081640  265442 system_pods.go:86] 20 kube-system pods found
	I0908 10:35:45.081685  265442 system_pods.go:89] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:35:45.081698  265442 system_pods.go:89] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 10:35:45.081709  265442 system_pods.go:89] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:45.081718  265442 system_pods.go:89] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:45.081730  265442 system_pods.go:89] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 10:35:45.081737  265442 system_pods.go:89] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:45.081746  265442 system_pods.go:89] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:45.081751  265442 system_pods.go:89] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:45.081759  265442 system_pods.go:89] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:45.081771  265442 system_pods.go:89] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:45.081776  265442 system_pods.go:89] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:45.081783  265442 system_pods.go:89] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:45.081796  265442 system_pods.go:89] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:45.081807  265442 system_pods.go:89] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:35:45.081817  265442 system_pods.go:89] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:45.081828  265442 system_pods.go:89] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:45.081836  265442 system_pods.go:89] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:35:45.081847  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:45.081858  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:45.081868  265442 system_pods.go:89] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 10:35:45.081888  265442 retry.go:31] will retry after 657.362886ms: missing components: kube-dns
	I0908 10:35:45.100752  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:45.307679  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:45.309842  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:45.309992  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:45.600206  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:45.745159  265442 system_pods.go:86] 20 kube-system pods found
	I0908 10:35:45.745193  265442 system_pods.go:89] "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 10:35:45.745200  265442 system_pods.go:89] "coredns-66bc5c9577-96ndd" [e698edec-2eb5-415f-b070-531b9754c2c3] Running
	I0908 10:35:45.745209  265442 system_pods.go:89] "csi-hostpath-attacher-0" [6ce6ecfc-6319-45ac-8359-30360a15e414] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 10:35:45.745215  265442 system_pods.go:89] "csi-hostpath-resizer-0" [235a1795-6853-48a4-8ec1-4720b018ea6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 10:35:45.745221  265442 system_pods.go:89] "csi-hostpathplugin-q2nfv" [e17735bc-e418-4505-82a8-6715d4e39aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 10:35:45.745225  265442 system_pods.go:89] "etcd-addons-310880" [024ead74-1445-44e3-b414-8f8b44fb4b45] Running
	I0908 10:35:45.745230  265442 system_pods.go:89] "kindnet-wnvgd" [7e129681-96c7-4ea9-9998-36f590d8b2ae] Running
	I0908 10:35:45.745233  265442 system_pods.go:89] "kube-apiserver-addons-310880" [78df989b-6a2c-4800-9f16-85efac552288] Running
	I0908 10:35:45.745237  265442 system_pods.go:89] "kube-controller-manager-addons-310880" [05d8a8f5-cc71-40f7-b2f8-3c62bd1cfd18] Running
	I0908 10:35:45.745246  265442 system_pods.go:89] "kube-ingress-dns-minikube" [026d4ce6-6619-4d1b-a1ce-c748974fe36e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 10:35:45.745249  265442 system_pods.go:89] "kube-proxy-rtvsz" [bb903bc3-49ef-4c7e-ae94-42fc231cd86b] Running
	I0908 10:35:45.745253  265442 system_pods.go:89] "kube-scheduler-addons-310880" [4c9ca113-991d-41a3-b6e4-e23d9a201800] Running
	I0908 10:35:45.745261  265442 system_pods.go:89] "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 10:35:45.745266  265442 system_pods.go:89] "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 10:35:45.745271  265442 system_pods.go:89] "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 10:35:45.745278  265442 system_pods.go:89] "registry-creds-764b6fb674-5wnbc" [d05d8963-0b43-43ea-abf8-504e9b5125be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 10:35:45.745283  265442 system_pods.go:89] "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 10:35:45.745290  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4jcbb" [e222ef00-3a8b-472b-96b2-e2c3ea7f3565] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:45.745295  265442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sqhjn" [600c376c-d99b-4a9c-9829-2c4ef5c0b26c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 10:35:45.745299  265442 system_pods.go:89] "storage-provisioner" [8dc115ef-c87a-493b-98b5-1a042b06e028] Running
	I0908 10:35:45.745307  265442 system_pods.go:126] duration metric: took 2.126065061s to wait for k8s-apps to be running ...
	I0908 10:35:45.745317  265442 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 10:35:45.745365  265442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 10:35:45.756930  265442 system_svc.go:56] duration metric: took 11.603399ms WaitForService to wait for kubelet
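
The kubelet check above relies on systemctl's exit code: `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active, so no output parsing is needed. A local sketch of the same check (the log runs it over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 iff the kubelet unit is active; any non-zero exit
        // surfaces here as a non-nil error.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
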
	I0908 10:35:45.756960  265442 kubeadm.go:578] duration metric: took 47.267539235s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 10:35:45.756983  265442 node_conditions.go:102] verifying NodePressure condition ...
	I0908 10:35:45.759953  265442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 10:35:45.759988  265442 node_conditions.go:123] node cpu capacity is 8
	I0908 10:35:45.760007  265442 node_conditions.go:105] duration metric: took 3.015214ms to run NodePressure ...
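
The NodePressure verification reads the node's capacity (the ephemeral-storage and CPU figures logged above) and confirms no pressure conditions are set on the node. A sketch using client-go rather than minikube's internal helpers; the kubeconfig path and node name are taken from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-310880", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Capacity figures as logged above.
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
        // Any non-Ready condition that is True (MemoryPressure,
        // DiskPressure, PIDPressure) indicates node pressure.
        for _, c := range node.Status.Conditions {
            if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                fmt.Printf("node under pressure: %s\n", c.Type)
            }
        }
    }
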
	I0908 10:35:45.760025  265442 start.go:241] waiting for startup goroutines ...
	I0908 10:35:45.807155  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:45.809663  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:45.810447  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:46.100748  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:46.307644  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:46.309712  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:46.309771  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:46.600429  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:46.879039  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:46.882026  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:46.882349  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:47.101186  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:47.308236  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:47.310289  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:47.310335  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:47.600696  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:47.807769  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:47.810214  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:47.810396  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:48.100690  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:48.307703  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:48.310081  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:48.310306  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:48.601285  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:48.807394  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:48.810361  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:48.814258  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:49.101544  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:49.307519  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:49.309824  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:49.310187  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:49.600649  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:49.807189  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:49.809459  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:49.809713  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:50.100541  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:50.307438  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:50.309667  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:50.309733  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:50.600604  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:50.806945  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:50.809546  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:50.809797  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:51.100380  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:51.159520  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:35:51.307587  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:51.309639  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:51.309760  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:51.600367  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:51.808033  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:51.810044  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:51.810078  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:52.100141  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 10:35:52.105620  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:35:52.105659  265442 retry.go:31] will retry after 21.289294592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
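
This is the same client-side validation failure as before; the retry replays the identical files, so only the backoff grows (10.7s earlier, 21.3s now) and the apply cannot succeed until the file on disk is fixed. Because the validation happens on the client, the error should be reproducible offline with a client dry-run; a sketch using the paths from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // --dry-run=client parses and validates the manifest without
        // sending anything to the cluster, so the "apiVersion not set,
        // kind not set" error can be inspected in isolation.
        cmd := exec.Command("kubectl", "apply", "--dry-run=client",
            "-f", "/etc/kubernetes/addons/ig-crd.yaml")
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("dry-run failed:", err)
        }
    }
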
	I0908 10:35:52.307062  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:52.309420  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:52.310094  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:52.602192  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:52.809016  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:52.810379  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:52.810990  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:53.100437  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:53.307022  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:53.309208  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:53.309845  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:53.599967  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:53.807179  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:53.809142  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:53.810038  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:54.101136  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:54.308455  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:54.379124  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:54.380008  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:54.600878  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:54.807307  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:54.879811  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:54.879991  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:55.100158  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:55.308863  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:55.311020  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:55.311077  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:55.600959  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:55.807345  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:55.809535  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:55.809543  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:56.099779  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:56.307231  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:56.309618  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:56.309623  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:56.600438  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:56.806907  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:56.809686  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:56.809816  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:57.100728  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:57.309608  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:57.309616  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:57.310005  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:57.600575  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:57.806977  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:57.879462  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:57.879536  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:58.100072  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:58.307947  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:58.310539  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:58.310934  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:58.601346  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:58.808230  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:58.878249  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:58.878317  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:59.100339  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:59.307810  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:59.377160  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:35:59.377212  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:59.600650  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:35:59.807271  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:35:59.809800  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:35:59.809948  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:00.100334  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:00.306969  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:00.309666  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:00.310241  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:00.601012  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:00.807524  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:00.809972  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:00.810023  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:01.100321  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:01.306613  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:01.310710  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:01.310791  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:01.600242  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:01.807889  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:01.809947  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:01.810080  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:02.100393  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:02.307358  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:02.309820  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:02.310262  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:02.601593  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:02.807187  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:02.809883  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:02.810166  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:03.101222  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:03.308079  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:03.310928  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:03.311149  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:03.600119  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:03.807870  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:03.810218  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:03.810307  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:04.100720  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:04.309409  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:04.310020  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:04.310160  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:04.600847  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:04.807716  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:04.809960  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:04.810185  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:05.100700  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:05.307638  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:05.310088  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:05.310233  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:05.600378  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:05.809390  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:05.812506  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:05.812759  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:06.100860  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:06.307449  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:06.310860  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:06.313208  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:06.601012  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:06.807602  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:06.810476  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:06.810519  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:07.100792  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:07.307482  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:07.310930  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:07.311013  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:07.600822  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:07.807819  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:07.810048  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:07.810029  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:08.100710  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:08.306656  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:08.311520  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:08.311580  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:08.601870  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:08.807759  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:08.810068  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:08.810320  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:09.100544  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:09.307080  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:09.309455  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:09.310864  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:09.600024  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:09.807591  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:09.809706  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:09.809774  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:10.100275  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:10.307161  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:10.309763  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:10.310436  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:10.601032  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:10.807729  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:10.809891  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:10.809920  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:11.100076  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:11.307300  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:11.309835  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:11.309865  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:11.599905  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:11.807427  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:11.809511  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:11.809595  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:12.100214  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:12.308172  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:12.310623  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:12.310623  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:12.600871  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:12.807501  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:12.809958  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:12.811045  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:13.100438  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:13.306703  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:13.310129  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:13.310155  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:13.395211  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 10:36:13.601400  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:13.807329  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:13.809716  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:13.809935  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 10:36:14.086782  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 10:36:14.086816  265442 retry.go:31] will retry after 44.767289953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
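	
	The validation failure above is kubectl's client-side schema check: every document in an applied manifest must carry the apiVersion and kind header fields, and the error reports both missing from /etc/kubernetes/addons/ig-crd.yaml (typically a truncated file, or an empty document left behind by a stray "---" separator). A minimal sketch of the header every manifest document needs — all names and values here are assumptions for illustration, not taken from the real ig-crd.yaml:
	
	# Hypothetical sketch only; the group/version shown is the standard one
	# for CRDs and the CRD name is invented.
	cat <<'EOF' >/tmp/crd-header-example.yaml
	apiVersion: apiextensions.k8s.io/v1   # assumed API group/version for a CRD
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.com          # hypothetical <plural>.<group> name
	# spec: ... body of the CRD follows ...
	EOF
	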
	I0908 10:36:14.099748  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:14.307335  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:14.309482  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 10:36:14.309701  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:14.600364  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:14.806586  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:14.809751  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:14.811024  265442 kapi.go:107] duration metric: took 1m10.003833353s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 10:36:15.100401  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:15.307117  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:15.309253  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:15.600712  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:15.807000  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:15.809674  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:16.099894  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:16.307634  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:16.309927  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:16.600477  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:16.806769  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:16.809398  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:17.099943  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:17.307102  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:17.309545  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:17.601438  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:17.884745  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:17.885670  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:18.101163  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:18.381911  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:18.382312  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:18.676672  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:18.880834  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:18.880928  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:19.100516  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:19.378325  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:19.379872  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:19.600399  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:19.807118  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:19.809794  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:20.100324  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:20.308019  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:20.310798  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:20.600339  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:20.807235  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:20.810007  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:21.100833  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:21.307775  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:21.310155  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:21.600372  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:21.807278  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:21.809709  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:22.100384  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:22.308195  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:22.310862  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:22.654111  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:22.808893  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:22.810901  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:23.101136  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:23.308110  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:23.310120  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:23.616466  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:23.807711  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:23.810352  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:24.101369  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:24.307061  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:24.309572  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:24.601148  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:24.808627  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:24.810263  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:25.100433  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:25.306825  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:25.309587  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:25.601412  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:25.807111  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:25.810168  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:26.100171  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:26.307914  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:26.310101  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:26.600101  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:26.809346  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:26.810442  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:27.101224  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:27.307329  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:27.309056  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:27.601027  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:27.880006  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:27.880225  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:28.100819  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:28.308315  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:28.310518  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:28.600616  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:28.878818  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:28.879165  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:29.101096  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:29.379062  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:29.379173  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:29.600280  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:29.806812  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:29.809729  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:30.099546  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:30.307013  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:30.309237  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:30.600623  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:30.807609  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:30.809779  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:31.099774  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:31.307006  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:31.309539  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:31.600992  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:31.807618  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:31.810338  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:32.100778  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:32.306907  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:32.309719  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:32.600391  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:32.811200  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:32.811464  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:33.100325  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:33.306613  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:33.310054  265442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 10:36:33.601209  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:33.807971  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:33.810013  265442 kapi.go:107] duration metric: took 1m29.003301328s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 10:36:34.100938  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:34.313454  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:34.601058  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:34.808045  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:35.100458  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:35.306891  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:35.599978  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:35.807609  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:36.100856  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:36.307076  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:36.600473  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:36.806791  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:37.101070  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:37.307391  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:37.600469  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:37.807162  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:38.100378  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:38.306599  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:38.600785  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:38.807095  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:39.100740  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:39.306994  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:39.600080  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:39.807814  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:40.099924  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:40.307173  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:40.600226  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:40.807901  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:41.100341  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:41.306519  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:41.600023  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:41.807815  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:42.100437  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:42.306856  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:42.599829  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:42.808310  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:43.100501  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:43.306794  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:43.599622  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:43.807167  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:44.099828  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:44.307386  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:44.601098  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:44.807828  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:45.100854  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:45.307696  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:45.600595  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:45.807004  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:46.100341  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:46.306596  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:46.600598  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:46.806928  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:47.100638  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:47.307306  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:47.600397  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:47.806600  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:48.101530  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:48.307128  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:48.600395  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:48.806844  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:49.100520  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:49.307122  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:49.600970  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:49.807765  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:50.100105  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:50.308002  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:50.600799  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:50.806883  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:51.100295  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:51.307243  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:51.600559  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:51.807219  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:52.100967  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:52.307328  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:52.600871  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:52.807318  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:53.101346  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:53.306906  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:53.600632  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 10:36:53.880704  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:54.099894  265442 kapi.go:107] duration metric: took 1m44.503210615s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 10:36:54.101241  265442 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-310880 cluster.
	I0908 10:36:54.102316  265442 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 10:36:54.103708  265442 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
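	
	The opt-out mentioned above is applied as a pod label. A minimal sketch of a pod that the gcp-auth webhook should skip, assuming the conventional "true" value for the label and with the pod name and image invented for illustration:
	
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo        # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"   # quoted so the value stays a string
	spec:
	  containers:
	  - name: demo
	    image: busybox              # hypothetical image
	    command: ["sleep", "3600"]
	EOF
	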
	I0908 10:36:54.307125  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:54.808381  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:55.307583  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:55.808263  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:56.307812  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:56.806972  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:57.308281  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:57.807462  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:58.307906  265442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 10:36:58.808127  265442 kapi.go:107] duration metric: took 1m52.504808205s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 10:36:58.855146  265442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 10:36:59.404598  265442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 10:36:59.404720  265442 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
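	
	The stderr above suggests its own workaround: rerunning the same apply with client-side validation disabled. A sketch of that retry — the command is the one from the log with --validate=false added per the error's hint; note this only skips the client-side check and cannot repair a genuinely malformed CRD file:
	
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	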
	I0908 10:36:59.406313  265442 out.go:179] * Enabled addons: registry-creds, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0908 10:36:59.407484  265442 addons.go:514] duration metric: took 2m0.918039728s for enable addons: enabled=[registry-creds storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0908 10:36:59.407528  265442 start.go:246] waiting for cluster config update ...
	I0908 10:36:59.407547  265442 start.go:255] writing updated cluster config ...
	I0908 10:36:59.407830  265442 ssh_runner.go:195] Run: rm -f paused
	I0908 10:36:59.411201  265442 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 10:36:59.414698  265442 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-96ndd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.418994  265442 pod_ready.go:94] pod "coredns-66bc5c9577-96ndd" is "Ready"
	I0908 10:36:59.419021  265442 pod_ready.go:86] duration metric: took 4.302957ms for pod "coredns-66bc5c9577-96ndd" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.420966  265442 pod_ready.go:83] waiting for pod "etcd-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.424691  265442 pod_ready.go:94] pod "etcd-addons-310880" is "Ready"
	I0908 10:36:59.424717  265442 pod_ready.go:86] duration metric: took 3.730334ms for pod "etcd-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.426578  265442 pod_ready.go:83] waiting for pod "kube-apiserver-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.430587  265442 pod_ready.go:94] pod "kube-apiserver-addons-310880" is "Ready"
	I0908 10:36:59.430615  265442 pod_ready.go:86] duration metric: took 4.015057ms for pod "kube-apiserver-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.432622  265442 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:36:59.815067  265442 pod_ready.go:94] pod "kube-controller-manager-addons-310880" is "Ready"
	I0908 10:36:59.815108  265442 pod_ready.go:86] duration metric: took 382.465632ms for pod "kube-controller-manager-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:37:00.015335  265442 pod_ready.go:83] waiting for pod "kube-proxy-rtvsz" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:37:00.415440  265442 pod_ready.go:94] pod "kube-proxy-rtvsz" is "Ready"
	I0908 10:37:00.415473  265442 pod_ready.go:86] duration metric: took 400.11014ms for pod "kube-proxy-rtvsz" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:37:00.616302  265442 pod_ready.go:83] waiting for pod "kube-scheduler-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:37:01.015063  265442 pod_ready.go:94] pod "kube-scheduler-addons-310880" is "Ready"
	I0908 10:37:01.015091  265442 pod_ready.go:86] duration metric: took 398.757915ms for pod "kube-scheduler-addons-310880" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 10:37:01.015104  265442 pod_ready.go:40] duration metric: took 1.60387193s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 10:37:01.072398  265442 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 10:37:01.074899  265442 out.go:179] * Done! kubectl is now configured to use "addons-310880" cluster and "default" namespace by default
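	
	The pod_ready phase above (the extra 4m0s wait for labelled kube-system pods) corresponds roughly to a label-selector readiness wait. A hedged kubectl equivalent for one of the listed labels, using the timeout from the log:
	
	kubectl --context addons-310880 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	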
	
	
	==> CRI-O <==
	Sep 08 10:38:53 addons-310880 crio[1050]: time="2025-09-08 10:38:53.361502186Z" level=info msg="Removed pod sandbox: 527c698802031c41d10c35a5bed2df3b6d9213bfc386ffe816ec4285d39c41ab" id=bde08ca0-b4a2-454d-b8c3-3e9ae4a56335 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.563329468Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-8ll88/POD" id=4b53733f-de1b-401e-b9ae-fd8d7ee92769 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.563419101Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.583687006Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8ll88 Namespace:default ID:80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612 UID:68808cac-510b-41cb-a686-dea94b25740b NetNS:/var/run/netns/31c61d43-088d-4ad1-a9a3-db240f2ad5d8 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.583728917Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-8ll88 to CNI network \"kindnet\" (type=ptp)"
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.595977756Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8ll88 Namespace:default ID:80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612 UID:68808cac-510b-41cb-a686-dea94b25740b NetNS:/var/run/netns/31c61d43-088d-4ad1-a9a3-db240f2ad5d8 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.596114991Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-8ll88 for CNI network kindnet (type=ptp)"
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.598287860Z" level=info msg="Ran pod sandbox 80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612 with infra container: default/hello-world-app-5d498dc89-8ll88/POD" id=4b53733f-de1b-401e-b9ae-fd8d7ee92769 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.599727742Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c917917b-b7ac-49ae-90bf-2e85d263b843 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.600020285Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c917917b-b7ac-49ae-90bf-2e85d263b843 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.600633996Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1503179a-8a4c-4b4e-836b-5b18a5683107 name=/runtime.v1.ImageService/PullImage
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.605570405Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 10:39:39 addons-310880 crio[1050]: time="2025-09-08 10:39:39.770746171Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.219457104Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=1503179a-8a4c-4b4e-836b-5b18a5683107 name=/runtime.v1.ImageService/PullImage
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.220327452Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8f6d7d52-565d-4dd0-b83c-fdc42fe50129 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.221107903Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8f6d7d52-565d-4dd0-b83c-fdc42fe50129 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.221996042Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=769a24f2-fa7c-441d-a221-f46f3aeabcf3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.222647368Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=769a24f2-fa7c-441d-a221-f46f3aeabcf3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.225709290Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-8ll88/hello-world-app" id=6a1a2b81-55da-454b-94b1-7618d938bd55 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.225833047Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.244813853Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e3346a6372dc745b1b6a131ed86bbfe4f683e05114cef41644c457c4bd097324/merged/etc/passwd: no such file or directory"
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.244867581Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e3346a6372dc745b1b6a131ed86bbfe4f683e05114cef41644c457c4bd097324/merged/etc/group: no such file or directory"
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.301420870Z" level=info msg="Created container d925ffa2212c6ed5d2fd5342c6d9f44da8df8b9aa1301385aa2d38a5f9cf8229: default/hello-world-app-5d498dc89-8ll88/hello-world-app" id=6a1a2b81-55da-454b-94b1-7618d938bd55 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.302175177Z" level=info msg="Starting container: d925ffa2212c6ed5d2fd5342c6d9f44da8df8b9aa1301385aa2d38a5f9cf8229" id=61facdb9-94d6-4e8a-acf1-5345f9302003 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 10:39:40 addons-310880 crio[1050]: time="2025-09-08 10:39:40.308742903Z" level=info msg="Started container" PID=12277 containerID=d925ffa2212c6ed5d2fd5342c6d9f44da8df8b9aa1301385aa2d38a5f9cf8229 description=default/hello-world-app-5d498dc89-8ll88/hello-world-app id=61facdb9-94d6-4e8a-acf1-5345f9302003 name=/runtime.v1.RuntimeService/StartContainer sandboxID=80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612
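	These CRI-O entries come from the node's container-runtime journal; a sketch for pulling the same window directly, assuming CRI-O runs as the crio systemd unit inside the kic container (the crio[1050] prefix above suggests it does):
	
	  out/minikube-linux-amd64 -p addons-310880 ssh "sudo journalctl -u crio --no-pager --since '2025-09-08 10:39:39'"
	
	The "Failed to open /etc/passwd" warnings during container creation are most likely benign here: minimal images such as kicbase/echo-server ship no /etc/passwd or /etc/group, and the container started successfully immediately afterwards.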
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	d925ffa2212c6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   80b35a4e64733       hello-world-app-5d498dc89-8ll88
	768008813d5a0       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   ff14348eb2c7d       nginx
	a3427b8766469       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   6e8ed03f113e9       busybox
	c4216fb72b77c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            2 minutes ago            Running             gadget                    0                   674ad1b5751b3       gadget-dzdkn
	5cdd865e0d3d0       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   d8070e5a5fe3c       ingress-nginx-controller-9cc49f96f-l9l2p
	d9e1aa25eea48       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago            Exited              patch                     0                   7c94a3f4b1bee       ingress-nginx-admission-patch-svm2c
	ee5437e96348d       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   6e779fe096e42       kube-ingress-dns-minikube
	a2d900c1ff519       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago            Exited              create                    0                   59cf97c675060       ingress-nginx-admission-create-jgscp
	e137a2cebfb22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago            Running             storage-provisioner       0                   2aa2d79516a0e       storage-provisioner
	46aa94e643a9c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             3 minutes ago            Running             coredns                   0                   5f56e0ecf3498       coredns-66bc5c9577-96ndd
	8c4282e00ba52       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago            Running             kube-proxy                0                   77b43bdd1126e       kube-proxy-rtvsz
	d0a38ba37e56f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             4 minutes ago            Running             kindnet-cni               0                   835a14f8d1709       kindnet-wnvgd
	f71c651120d1a       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             4 minutes ago            Running             kube-controller-manager   0                   69cd7e41ce795       kube-controller-manager-addons-310880
	361c936972c5b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             4 minutes ago            Running             kube-apiserver            0                   654a291d2f055       kube-apiserver-addons-310880
	d2c23c376d354       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             4 minutes ago            Running             kube-scheduler            0                   40ecd1d7c4c1b       kube-scheduler-addons-310880
	77220972a8d72       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   2eb329ce6e0f4       etcd-addons-310880
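	This table is crictl output; the same listing, including the two Exited kube-webhook-certgen containers (the ingress admission create/patch Jobs, which are expected to run once and exit), can be reproduced with:
	
	  out/minikube-linux-amd64 -p addons-310880 ssh "sudo crictl ps -a"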
	
	
	==> coredns [46aa94e643a9cf65f953737998d4b817d556dff3d42890b12146c133bfdba990] <==
	[INFO] 10.244.0.16:54599 - 37151 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.008586504s
	[INFO] 10.244.0.16:45474 - 29989 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00618468s
	[INFO] 10.244.0.16:45474 - 30367 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006675125s
	[INFO] 10.244.0.16:44964 - 48789 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004705682s
	[INFO] 10.244.0.16:44964 - 48519 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006673673s
	[INFO] 10.244.0.16:56133 - 18536 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181152s
	[INFO] 10.244.0.16:56133 - 18349 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000227965s
	[INFO] 10.244.0.22:43031 - 51942 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178167s
	[INFO] 10.244.0.22:33614 - 11017 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000206s
	[INFO] 10.244.0.22:40907 - 26280 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117481s
	[INFO] 10.244.0.22:41501 - 60807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143594s
	[INFO] 10.244.0.22:33138 - 18281 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125064s
	[INFO] 10.244.0.22:38203 - 27527 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156168s
	[INFO] 10.244.0.22:34202 - 20788 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005080194s
	[INFO] 10.244.0.22:40688 - 7162 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00515284s
	[INFO] 10.244.0.22:51198 - 24521 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007485429s
	[INFO] 10.244.0.22:44975 - 3086 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.008041071s
	[INFO] 10.244.0.22:54095 - 58270 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005253967s
	[INFO] 10.244.0.22:46593 - 43672 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005775684s
	[INFO] 10.244.0.22:53422 - 63962 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005357305s
	[INFO] 10.244.0.22:60374 - 34454 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005412978s
	[INFO] 10.244.0.22:53794 - 61777 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001093415s
	[INFO] 10.244.0.22:46536 - 62933 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002353647s
	[INFO] 10.244.0.27:47241 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000267962s
	[INFO] 10.244.0.27:33114 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017625s
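	The NXDOMAIN ladder above is ordinary cluster DNS behavior: with the default pod dnsPolicy and ndots:5, an external name such as storage.googleapis.com is tried against every resolv.conf search suffix (the cluster.local domains plus the GCE host's internal domains) before the bare name resolves, which is why each sequence ends in a NOERROR pair. A sketch for fetching the same log:
	
	  kubectl --context addons-310880 -n kube-system logs -l k8s-app=kube-dns --tail=100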
	
	
	==> describe nodes <==
	Name:               addons-310880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-310880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=addons-310880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T10_34_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-310880
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 10:34:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-310880
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 10:39:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 10:38:27 +0000   Mon, 08 Sep 2025 10:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 10:38:27 +0000   Mon, 08 Sep 2025 10:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 10:38:27 +0000   Mon, 08 Sep 2025 10:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 10:38:27 +0000   Mon, 08 Sep 2025 10:35:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-310880
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 081d65dfde4e4cc0af99364c2666d1b2
	  System UUID:                918fcb9d-0a3b-43bb-8918-134e5f9ef00d
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     hello-world-app-5d498dc89-8ll88             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-dzdkn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-l9l2p    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m36s
	  kube-system                 coredns-66bc5c9577-96ndd                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m41s
	  kube-system                 etcd-addons-310880                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m48s
	  kube-system                 kindnet-wnvgd                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m42s
	  kube-system                 kube-apiserver-addons-310880                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-310880       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-rtvsz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-310880                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m37s                  kube-proxy       
	  Warning  CgroupV1                 4m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node addons-310880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node addons-310880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m54s (x8 over 4m54s)  kubelet          Node addons-310880 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m48s                  kubelet          Node addons-310880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m48s                  kubelet          Node addons-310880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m48s                  kubelet          Node addons-310880 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m43s                  node-controller  Node addons-310880 event: Registered Node addons-310880 in Controller
	  Normal   NodeReady                3m57s                  kubelet          Node addons-310880 status is now: NodeReady
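	The block above is the standard node description and can be regenerated at any point during triage (a sketch, assuming the cluster is still running):
	
	  kubectl --context addons-310880 describe node addons-310880
	
	The duplicated NodeHasSufficient* events are expected: the (x8 over 4m54s) entries were recorded before node registration, and the second set follows the "Starting kubelet." event at 4m48s, so the repetition reflects a kubelet restart during bootstrap rather than a problem.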
	
	
	==> dmesg <==
	[  +0.000003] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001766] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.632194] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023978] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.805671] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 8 10:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +1.014386] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +2.015870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +4.095613] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +8.187276] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[ +16.130610] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[Sep 8 10:38] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
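	The repeated martian-source lines (pod IP 10.244.0.20 paired with a 127.0.0.1 source on eth0) are consistent with traffic aimed at http://127.0.0.1/ being DNATed to a pod IP, which kube-proxy enables by setting route_localnet=1 (see its log further down). A diagnostic sketch for checking the relevant sysctls on the node:
	
	  out/minikube-linux-amd64 -p addons-310880 ssh "sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians"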
	
	
	==> etcd [77220972a8d7247260078ce17c9783344c01580c804867e8744250b5e8c90324] <==
	{"level":"warn","ts":"2025-09-08T10:35:02.388343Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.531171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 ","response":"range_response_count:1 size:7265"}
	{"level":"info","ts":"2025-09-08T10:35:02.478683Z","caller":"traceutil/trace.go:172","msg":"trace[653695821] range","detail":"{range_begin:/registry/deployments/kube-system/registry-creds; range_end:; response_count:1; response_revision:426; }","duration":"201.857769ms","start":"2025-09-08T10:35:02.276792Z","end":"2025-09-08T10:35:02.478650Z","steps":["trace[653695821] 'agreement among raft nodes before linearized reading'  (duration: 106.898182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:35:02.390578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.176509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-zk5qg\" limit:1 ","response":"range_response_count:1 size:4357"}
	{"level":"info","ts":"2025-09-08T10:35:02.479175Z","caller":"traceutil/trace.go:172","msg":"trace[1615023218] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-zk5qg; range_end:; response_count:1; response_revision:426; }","duration":"282.774028ms","start":"2025-09-08T10:35:02.196379Z","end":"2025-09-08T10:35:02.479153Z","steps":["trace[1615023218] 'agreement among raft nodes before linearized reading'  (duration: 187.321849ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:35:02.390695Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"208.293162ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-310880\" limit:1 ","response":"range_response_count:1 size:5511"}
	{"level":"info","ts":"2025-09-08T10:35:02.479709Z","caller":"traceutil/trace.go:172","msg":"trace[1318868681] range","detail":"{range_begin:/registry/minions/addons-310880; range_end:; response_count:1; response_revision:426; }","duration":"297.290327ms","start":"2025-09-08T10:35:02.182394Z","end":"2025-09-08T10:35:02.479684Z","steps":["trace[1318868681] 'agreement among raft nodes before linearized reading'  (duration: 201.316885ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:02.493121Z","caller":"traceutil/trace.go:172","msg":"trace[613472061] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"195.663478ms","start":"2025-09-08T10:35:02.297434Z","end":"2025-09-08T10:35:02.493098Z","steps":["trace[613472061] 'process raft request'  (duration: 103.562122ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:02.494146Z","caller":"traceutil/trace.go:172","msg":"trace[781866319] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"111.829905ms","start":"2025-09-08T10:35:02.382299Z","end":"2025-09-08T10:35:02.494129Z","steps":["trace[781866319] 'process raft request'  (duration: 110.568647ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:02.577003Z","caller":"traceutil/trace.go:172","msg":"trace[488282270] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"194.286077ms","start":"2025-09-08T10:35:02.382695Z","end":"2025-09-08T10:35:02.576981Z","steps":["trace[488282270] 'process raft request'  (duration: 111.41628ms)","trace[488282270] 'compare'  (duration: 82.234008ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T10:35:02.577556Z","caller":"traceutil/trace.go:172","msg":"trace[603442572] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"194.229838ms","start":"2025-09-08T10:35:02.383299Z","end":"2025-09-08T10:35:02.577529Z","steps":["trace[603442572] 'process raft request'  (duration: 193.225427ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:02.577898Z","caller":"traceutil/trace.go:172","msg":"trace[1536576178] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"193.627866ms","start":"2025-09-08T10:35:02.384251Z","end":"2025-09-08T10:35:02.577879Z","steps":["trace[1536576178] 'process raft request'  (duration: 192.411438ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:02.578143Z","caller":"traceutil/trace.go:172","msg":"trace[721732420] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"177.675903ms","start":"2025-09-08T10:35:02.400449Z","end":"2025-09-08T10:35:02.578125Z","steps":["trace[721732420] 'process raft request'  (duration: 176.357516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:35:02.578183Z","caller":"traceutil/trace.go:172","msg":"trace[402883479] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"177.781164ms","start":"2025-09-08T10:35:02.400386Z","end":"2025-09-08T10:35:02.578168Z","steps":["trace[402883479] 'process raft request'  (duration: 176.356651ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T10:35:02.579469Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"398.816901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:35:02.584455Z","caller":"traceutil/trace.go:172","msg":"trace[1035930839] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:426; }","duration":"404.00892ms","start":"2025-09-08T10:35:02.180424Z","end":"2025-09-08T10:35:02.584433Z","steps":["trace[1035930839] 'agreement among raft nodes before linearized reading'  (duration: 203.303028ms)","trace[1035930839] 'range keys from in-memory index tree'  (duration: 195.030851ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T10:35:02.584531Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T10:35:02.180392Z","time spent":"404.10942ms","remote":"127.0.0.1:43102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T10:35:06.838078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:35:06.845145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:35:27.806879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:35:27.814634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:35:27.882381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:35:27.887666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:36:34.310998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.679743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T10:36:34.311083Z","caller":"traceutil/trace.go:172","msg":"trace[1323519950] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1215; }","duration":"107.783765ms","start":"2025-09-08T10:36:34.203281Z","end":"2025-09-08T10:36:34.311065Z","steps":["trace[1323519950] 'agreement among raft nodes before linearized reading'  (duration: 93.768428ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T10:36:34.311151Z","caller":"traceutil/trace.go:172","msg":"trace[1426296721] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"147.005417ms","start":"2025-09-08T10:36:34.164119Z","end":"2025-09-08T10:36:34.311125Z","steps":["trace[1426296721] 'process raft request'  (duration: 133.009256ms)","trace[1426296721] 'compare'  (duration: 13.833506ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:39:40 up  1:22,  0 users,  load average: 0.63, 29.70, 80.01
	Linux addons-310880 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
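	This section is just uptime, uname -a, and the PRETTY_NAME line from /etc/os-release; the 1/5/15-minute load averages (0.63, 29.70, 80.01) indicate the host was under heavy load earlier in the run and has since quiesced. To reproduce:
	
	  out/minikube-linux-amd64 -p addons-310880 ssh "uptime && uname -a && grep PRETTY_NAME /etc/os-release"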
	
	
	==> kindnet [d0a38ba37e56ff92255566a01228736a8b1e9138f494b14acbc09fc5c55e3a9d] <==
	I0908 10:37:32.597568       1 main.go:301] handling current node
	I0908 10:37:42.597562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:37:42.597602       1 main.go:301] handling current node
	I0908 10:37:52.597995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:37:52.598042       1 main.go:301] handling current node
	I0908 10:38:02.597326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:38:02.597373       1 main.go:301] handling current node
	I0908 10:38:12.598323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:38:12.598369       1 main.go:301] handling current node
	I0908 10:38:22.599768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:38:22.599805       1 main.go:301] handling current node
	I0908 10:38:32.597954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:38:32.598002       1 main.go:301] handling current node
	I0908 10:38:42.598061       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:38:42.598112       1 main.go:301] handling current node
	I0908 10:38:52.599759       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:38:52.599799       1 main.go:301] handling current node
	I0908 10:39:02.599753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:39:02.599792       1 main.go:301] handling current node
	I0908 10:39:12.601572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:39:12.601608       1 main.go:301] handling current node
	I0908 10:39:22.600055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:39:22.600094       1 main.go:301] handling current node
	I0908 10:39:32.603810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:39:32.603867       1 main.go:301] handling current node
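	kindnet re-syncs roughly every ten seconds, and on this single-node cluster each pass only handles the local node, so the repetition above is the idle steady state. A sketch for tailing it live (assuming the daemonset is named kindnet, as the pod name kindnet-wnvgd suggests):
	
	  kubectl --context addons-310880 -n kube-system logs daemonset/kindnet --tail=20 -f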
	
	
	==> kube-apiserver [361c936972c5b4e73f807dd6da21c135a80a62449a662741c46c7c36e2dc0f11] <==
	I0908 10:37:20.394974       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.57.254"}
	I0908 10:37:42.187937       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:37:43.605557       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0908 10:38:03.466021       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0908 10:38:03.471933       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0908 10:38:03.477840       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 10:38:09.420082       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 10:38:10.190736       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:38:10.190797       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:38:10.206355       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:38:10.206412       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:38:10.211645       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:38:10.211725       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 10:38:10.221476       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:38:10.221626       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0908 10:38:10.295642       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	I0908 10:38:10.392490       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 10:38:10.392636       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 10:38:11.211954       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 10:38:11.399194       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0908 10:38:11.400137       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0908 10:38:18.478436       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 10:38:23.010107       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:38:42.742115       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:39:39.384925       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.17.97"}
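	The 10:39:39 clusterIP allocation for default/hello-world-app lines up with the CRI-O sandbox creation at the same second in the runtime log above, so the apiserver side of the post-mortem deployment looks healthy. A sketch to inspect the service it created:
	
	  kubectl --context addons-310880 get svc hello-world-app -o wide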
	
	
	==> kube-controller-manager [f71c651120d1a5eacae76e8f07669bbb898e43c043a1aafc5047f8fadbc374cb] <==
	E0908 10:38:20.899893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:38:21.860180       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:21.861241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:38:27.597066       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:27.598092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0908 10:38:28.019494       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 10:38:28.019540       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:38:28.019588       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 10:38:28.019618       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0908 10:38:28.130263       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:28.131465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:38:31.041216       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:31.042426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:38:42.577912       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:42.578957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:38:44.260304       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:44.261476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:38:52.787195       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:38:52.788437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:39:11.386725       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:39:11.387675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:39:24.618389       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:39:24.619604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 10:39:37.191056       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 10:39:37.192208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
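	The repeating PartialObjectMetadata watch failures begin right after the apiserver removed the snapshot.storage.k8s.io groups at 10:38:10-10:38:11 ("Terminating all watchers" in its log above); the controller-manager's metadata informers keep retrying list/watch for resource types that no longer exist, which is noisy but harmless. A sketch to confirm the groups are gone:
	
	  kubectl --context addons-310880 api-resources | grep snapshot || echo "no snapshot resources"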
	
	
	==> kube-proxy [8c4282e00ba52dfcd64f73b71f7088a05bbfcc223911cff1439d8b7618efa88e] <==
	I0908 10:35:02.186513       1 server_linux.go:53] "Using iptables proxy"
	I0908 10:35:03.279694       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:35:03.384408       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:35:03.385694       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 10:35:03.385959       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:35:03.794691       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 10:35:03.795076       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:35:03.881637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:35:03.889844       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:35:03.890024       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:35:03.893915       1 config.go:309] "Starting node config controller"
	I0908 10:35:03.894023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:35:03.894060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:35:03.894008       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:35:03.894109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:35:03.893982       1 config.go:200] "Starting service config controller"
	I0908 10:35:03.893998       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:35:03.894154       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:35:03.894135       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:35:03.995160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 10:35:03.995507       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 10:35:03.995599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
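	The "Kube-proxy configuration may be incomplete" warning at startup is advisory: with nodePortAddresses unset, NodePorts are accepted on every local IP, including 127.0.0.1 once route_localnet=1 is set (which is what the earlier dmesg martian entries reflect). A sketch for inspecting the live setting in the kubeadm-managed configmap:
	
	  kubectl --context addons-310880 -n kube-system get cm kube-proxy -o yaml | grep -n nodePortAddresses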
	
	
	==> kube-scheduler [d2c23c376d354c90e8b5618cf29ed81a9eea877184ff1cd3900a7eb74ee84ad5] <==
	E0908 10:34:50.297583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 10:34:50.297633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 10:34:50.297657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 10:34:50.297703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 10:34:50.297743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 10:34:50.297786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 10:34:50.297832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 10:34:50.297832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 10:34:50.297913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 10:34:50.297967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 10:34:50.298142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 10:34:50.298173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 10:34:50.298202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 10:34:50.298241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 10:34:51.130807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 10:34:51.253272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 10:34:51.346234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 10:34:51.378168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 10:34:51.378192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 10:34:51.382601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 10:34:51.401362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 10:34:51.408590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 10:34:51.477886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 10:34:51.481091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0908 10:34:51.792922       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
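	The burst of "Failed to watch ... forbidden" errors at 10:34:50-10:34:51 is a normal bootstrap race: the scheduler starts before kubeadm finishes binding its RBAC, and the final "Caches are synced" line shows it recovered. A sketch to verify the binding that resolves it:
	
	  kubectl --context addons-310880 get clusterrolebinding system:kube-scheduler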
	
	
	==> kubelet <==
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.753976    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/abc1e2cd142fad2ae2de9974c73369378f0c4e26ce8440bfe13723be3beff966/diff" to get inode usage: stat /var/lib/containers/storage/overlay/abc1e2cd142fad2ae2de9974c73369378f0c4e26ce8440bfe13723be3beff966/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.755110    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2c235e58a6dd19d12ee2f720e1c42b62c2e634c58b61fbe68bc9b85984c55177/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2c235e58a6dd19d12ee2f720e1c42b62c2e634c58b61fbe68bc9b85984c55177/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.776631    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/48af5fcee20fc9aa5aadb11cdee6dc43b2e0a81274ea9bfe297efe70bd5e4237/diff" to get inode usage: stat /var/lib/containers/storage/overlay/48af5fcee20fc9aa5aadb11cdee6dc43b2e0a81274ea9bfe297efe70bd5e4237/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.781251    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b455d546910542b8597afc3d693fd985b78acaebb0cefd6f7cf831d3620d8cd7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b455d546910542b8597afc3d693fd985b78acaebb0cefd6f7cf831d3620d8cd7/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.783460    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/48af5fcee20fc9aa5aadb11cdee6dc43b2e0a81274ea9bfe297efe70bd5e4237/diff" to get inode usage: stat /var/lib/containers/storage/overlay/48af5fcee20fc9aa5aadb11cdee6dc43b2e0a81274ea9bfe297efe70bd5e4237/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.794646    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5ab7ab9cf769d1f56ec7aca6e3f7b09aae2712175d6458e03389a282adaaa800/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5ab7ab9cf769d1f56ec7aca6e3f7b09aae2712175d6458e03389a282adaaa800/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.799126    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/23e2692fe7371326ae2fc8e1f2cb1a49680df010f68591168f09c0a21d8bcbd7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/23e2692fe7371326ae2fc8e1f2cb1a49680df010f68591168f09c0a21d8bcbd7/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.806996    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5ab7ab9cf769d1f56ec7aca6e3f7b09aae2712175d6458e03389a282adaaa800/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5ab7ab9cf769d1f56ec7aca6e3f7b09aae2712175d6458e03389a282adaaa800/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.813603    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/65b68a0eb9c490e81bfdb377257581f9a7a4bdfb3081304f9ba6ffc6d6b5614a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/65b68a0eb9c490e81bfdb377257581f9a7a4bdfb3081304f9ba6ffc6d6b5614a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.837176    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327932836909547  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:38:52 addons-310880 kubelet[1680]: E0908 10:38:52.837210    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327932836909547  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:38:53 addons-310880 kubelet[1680]: I0908 10:38:53.205141    1680 scope.go:117] "RemoveContainer" containerID="e9ac2c3cf066742ef7a12e910eb923a40fb7f8f90c4ccc795a3d3da86239e9bd"
	Sep 08 10:38:53 addons-310880 kubelet[1680]: I0908 10:38:53.223739    1680 scope.go:117] "RemoveContainer" containerID="3bae940ae5a38043d80d797496e6ac33ae4a52fb2fa3f892b733948af92f5b89"
	Sep 08 10:38:53 addons-310880 kubelet[1680]: I0908 10:38:53.240826    1680 scope.go:117] "RemoveContainer" containerID="96ecc932c62a3f97dffd227b238af224b743964a8eb9eef191264b55d42af9aa"
	Sep 08 10:39:02 addons-310880 kubelet[1680]: E0908 10:39:02.839002    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327942838742994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:02 addons-310880 kubelet[1680]: E0908 10:39:02.839038    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327942838742994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:12 addons-310880 kubelet[1680]: E0908 10:39:12.841546    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327952841224789  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:12 addons-310880 kubelet[1680]: E0908 10:39:12.841583    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327952841224789  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:22 addons-310880 kubelet[1680]: E0908 10:39:22.844648    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327962844383513  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:22 addons-310880 kubelet[1680]: E0908 10:39:22.844681    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327962844383513  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:32 addons-310880 kubelet[1680]: E0908 10:39:32.846996    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757327972846716734  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:32 addons-310880 kubelet[1680]: E0908 10:39:32.847028    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757327972846716734  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 10:39:39 addons-310880 kubelet[1680]: I0908 10:39:39.335447    1680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm4xh\" (UniqueName: \"kubernetes.io/projected/68808cac-510b-41cb-a686-dea94b25740b-kube-api-access-qm4xh\") pod \"hello-world-app-5d498dc89-8ll88\" (UID: \"68808cac-510b-41cb-a686-dea94b25740b\") " pod="default/hello-world-app-5d498dc89-8ll88"
	Sep 08 10:39:39 addons-310880 kubelet[1680]: W0908 10:39:39.597768    1680 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8ab1d2a445a20842d50f8f0b08b3a0b8e457c91eaf4210d8b64be1af132f3c8c/crio-80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612 WatchSource:0}: Error finding container 80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612: Status 404 returned error can't find the container with id 80b35a4e647338adcef1ecb89a29c4f60ad1bdb178be2c38c716d71e666be612
	Sep 08 10:39:39 addons-310880 kubelet[1680]: I0908 10:39:39.689711    1680 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [e137a2cebfb221a7ed07cbdd914823c226254e3ffb8a8a69d21378b508f270fd] <==
	W0908 10:39:15.609988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:17.613205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:17.617408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:19.621177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:19.626143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:21.629452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:21.634059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:23.637562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:23.643148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:25.646431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:25.650117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:27.653248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:27.659040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:29.662122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:29.666747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:31.669803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:31.674838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:33.677701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:33.681746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:35.685937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:35.690670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:37.693786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:37.698099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:39.701161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:39:39.708413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
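Note: two recurring patterns in the log dump above are background noise rather than part of the ingress failure. The kube-scheduler's burst of "Failed to watch ... is forbidden" errors at 10:34:51 is the usual startup race before RBAC bootstrapping finishes, and the kubelet's "missing image stats" lines plus the storage-provisioner's v1 Endpoints deprecation warnings simply repeat on a timer. A quick triage sketch against this cluster's context (these commands are not part of the harness): verify the scheduler's permissions after startup, then re-read the dump with the repeating noise hidden.

	# Should print "yes" once scheduler RBAC has been bootstrapped.
	kubectl --context addons-310880 auth can-i list pods --as=system:kube-scheduler
	# Re-read the logs with the two repeating noise patterns filtered out.
	out/minikube-linux-amd64 -p addons-310880 logs | grep -vE 'missing image stats|v1 Endpoints is deprecated'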
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-310880 -n addons-310880
helpers_test.go:269: (dbg) Run:  kubectl --context addons-310880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jgscp ingress-nginx-admission-patch-svm2c
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-310880 describe pod ingress-nginx-admission-create-jgscp ingress-nginx-admission-patch-svm2c
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-310880 describe pod ingress-nginx-admission-create-jgscp ingress-nginx-admission-patch-svm2c: exit status 1 (61.530791ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jgscp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-svm2c" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-310880 describe pod ingress-nginx-admission-create-jgscp ingress-nginx-admission-patch-svm2c: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable ingress-dns --alsologtostderr -v=1: (1.480657542s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable ingress --alsologtostderr -v=1: (7.751462619s)
--- FAIL: TestAddons/parallel/Ingress (151.58s)

TestFunctional/parallel/ServiceCmdConnect (603.25s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-548498 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-548498 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-kttfw" [c452c08d-1f0e-48c8-870e-f45054aece58] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-548498 -n functional-548498
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 10:53:40.327850063 +0000 UTC m=+1183.059948095
functional_test.go:1645: (dbg) Run:  kubectl --context functional-548498 describe po hello-node-connect-7d85dfc575-kttfw -n default
functional_test.go:1645: (dbg) kubectl --context functional-548498 describe po hello-node-connect-7d85dfc575-kttfw -n default:
Name:             hello-node-connect-7d85dfc575-kttfw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-548498/192.168.49.2
Start Time:       Mon, 08 Sep 2025 10:43:39 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rt7rh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rt7rh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-kttfw to functional-548498
  Normal   Pulling    6m52s (x5 over 9m57s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m52s (x5 over 9m57s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
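The Failed event pinpoints the cause: the deployment was created with the short image name "kicbase/echo-server" (functional_test.go:1636 above), and CRI-O will not expand a short name when /etc/containers/registries.conf defines no unqualified-search-registries. Two possible remedies, sketched under the assumption that the image is hosted on Docker Hub (neither was run as part of this test):

	# Option 1: use a fully qualified reference so no short-name expansion is needed.
	kubectl --context functional-548498 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
	# Option 2: declare a search registry inside the node and restart CRI-O
	# (sketch only; the key must sit at the top level of registries.conf).
	minikube -p functional-548498 ssh -- "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"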
functional_test.go:1645: (dbg) Run:  kubectl --context functional-548498 logs hello-node-connect-7d85dfc575-kttfw -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-548498 logs hello-node-connect-7d85dfc575-kttfw -n default: exit status 1 (73.626334ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-kttfw" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-548498 logs hello-node-connect-7d85dfc575-kttfw -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-548498 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-kttfw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-548498/192.168.49.2
Start Time:       Mon, 08 Sep 2025 10:43:39 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rt7rh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rt7rh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-kttfw to functional-548498
  Normal   Pulling    6m52s (x5 over 9m57s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m52s (x5 over 9m57s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-548498 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-548498 logs -l app=hello-node-connect: exit status 1 (69.152526ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-kttfw" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-548498 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-548498 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.39.143
IPs:                      10.96.39.143
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30864/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
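Consistent with the pull failure, the service has an empty Endpoints field: its only backing pod never became Ready, so there is nothing for the NodePort to forward to. One way to confirm, assuming the standard discovery.k8s.io/v1 service-name label:

	kubectl --context functional-548498 get endpointslices -l kubernetes.io/service-name=hello-node-connect -o wide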
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-548498
helpers_test.go:243: (dbg) docker inspect functional-548498:

-- stdout --
	[
	    {
	        "Id": "7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74",
	        "Created": "2025-09-08T10:40:51.145063032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289656,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T10:40:51.176469128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/hostname",
	        "HostsPath": "/var/lib/docker/containers/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/hosts",
	        "LogPath": "/var/lib/docker/containers/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74-json.log",
	        "Name": "/functional-548498",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-548498:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-548498",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74",
	                "LowerDir": "/var/lib/docker/overlay2/c36be1225a0a5be4e55d22b5b7ce187c7326624b0f9687e6a7e3e14031f65334-init/diff:/var/lib/docker/overlay2/42ba3aa56f0a82ca44fc0cd64f44c2376737b78d7d73ce4114d5dbec5843e84a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c36be1225a0a5be4e55d22b5b7ce187c7326624b0f9687e6a7e3e14031f65334/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c36be1225a0a5be4e55d22b5b7ce187c7326624b0f9687e6a7e3e14031f65334/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c36be1225a0a5be4e55d22b5b7ce187c7326624b0f9687e6a7e3e14031f65334/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-548498",
	                "Source": "/var/lib/docker/volumes/functional-548498/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-548498",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-548498",
	                "name.minikube.sigs.k8s.io": "functional-548498",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e247f1618fd1f210243a38b592375c343433fd8305ad620380c24afd33db832",
	            "SandboxKey": "/var/run/docker/netns/6e247f1618fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-548498": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:f2:44:cd:6d:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2ff403a6fb7e0c4992105de6a750fe0575c7670a59126c1f319d6fecca7dbbc5",
	                    "EndpointID": "a66bd0f8dff8d863d33eaca38c0fd47d33a76ed6cb509ebce1eec9a590cb8bce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-548498",
	                        "7da1d7fb2bd2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
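Nothing in the inspect output points at a node-level problem: the container is Running with the expected 4 GiB / 2 CPU limits, and the API server port 8441 is published on 127.0.0.1:32781. A sanity probe of that mapping (assuming the API server's default anonymous access to its health endpoints):

	curl -sk https://127.0.0.1:32781/livez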
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-548498 -n functional-548498
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 logs -n 25: (1.539856531s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-548498 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ start          │ -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ start          │ -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ ssh            │ functional-548498 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │ 08 Sep 25 10:43 UTC │
	│ start          │ -p functional-548498 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-548498 --alsologtostderr -v=1                                                     │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │ 08 Sep 25 10:44 UTC │
	│ ssh            │ functional-548498 ssh -- ls -la /mount-9p                                                                          │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │ 08 Sep 25 10:43 UTC │
	│ ssh            │ functional-548498 ssh sudo umount -f /mount-9p                                                                     │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ mount          │ -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount2 --alsologtostderr -v=1 │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ mount          │ -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount1 --alsologtostderr -v=1 │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ mount          │ -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount3 --alsologtostderr -v=1 │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ ssh            │ functional-548498 ssh findmnt -T /mount1                                                                           │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │ 08 Sep 25 10:43 UTC │
	│ ssh            │ functional-548498 ssh findmnt -T /mount2                                                                           │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │ 08 Sep 25 10:43 UTC │
	│ ssh            │ functional-548498 ssh findmnt -T /mount3                                                                           │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │ 08 Sep 25 10:43 UTC │
	│ mount          │ -p functional-548498 --kill=true                                                                                   │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:43 UTC │                     │
	│ update-context │ functional-548498 update-context --alsologtostderr -v=2                                                            │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-548498 update-context --alsologtostderr -v=2                                                            │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ update-context │ functional-548498 update-context --alsologtostderr -v=2                                                            │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-548498 image ls --format short --alsologtostderr                                                        │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-548498 image ls --format yaml --alsologtostderr                                                         │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ ssh            │ functional-548498 ssh pgrep buildkitd                                                                              │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │                     │
	│ image          │ functional-548498 image build -t localhost/my-image:functional-548498 testdata/build --alsologtostderr             │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-548498 image ls                                                                                         │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-548498 image ls --format json --alsologtostderr                                                         │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	│ image          │ functional-548498 image ls --format table --alsologtostderr                                                        │ functional-548498 │ jenkins │ v1.36.0 │ 08 Sep 25 10:44 UTC │ 08 Sep 25 10:44 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:43:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:43:57.962046  305814 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:57.962342  305814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:57.962358  305814 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:57.962364  305814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:57.962593  305814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 10:43:57.963163  305814 out.go:368] Setting JSON to false
	I0908 10:43:57.964318  305814 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5182,"bootTime":1757323056,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:57.964436  305814 start.go:140] virtualization: kvm guest
	I0908 10:43:57.966462  305814 out.go:179] * [functional-548498] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:57.968668  305814 notify.go:220] Checking for updates...
	I0908 10:43:57.968727  305814 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:57.969948  305814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:57.971263  305814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:43:57.972569  305814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 10:43:57.973841  305814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:57.975424  305814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:57.977322  305814 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:57.978106  305814 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:58.008178  305814 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 10:43:58.008280  305814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:43:58.075592  305814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 10:43:58.064283018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:43:58.075775  305814 docker.go:318] overlay module found
	I0908 10:43:58.078326  305814 out.go:179] * Using the docker driver based on existing profile
	I0908 10:43:58.079991  305814 start.go:304] selected driver: docker
	I0908 10:43:58.080016  305814 start.go:918] validating driver "docker" against &{Name:functional-548498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-548498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.080144  305814 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:58.080230  305814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:43:58.134581  305814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 10:43:58.124649934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:43:58.135259  305814 cni.go:84] Creating CNI manager for ""
	I0908 10:43:58.135319  305814 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 10:43:58.135368  305814 start.go:348] cluster config:
	{Name:functional-548498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-548498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:58.137899  305814 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 08 10:44:03 functional-548498 crio[5528]: time="2025-09-08 10:44:03.851174218Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 10:44:03 functional-548498 crio[5528]: time="2025-09-08 10:44:03.866274020Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4cbeb9b3ccc1b159a8f21b45c89e1abca22d791853ae5ee67eae6f4bc23305ee/merged/etc/group: no such file or directory"
	Sep 08 10:44:03 functional-548498 crio[5528]: time="2025-09-08 10:44:03.902918425Z" level=info msg="Created container ce184ddae03e54b03ef64260f932a1ea36c4d2bc83dd1cb6d1e985ce894c7497: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tfd52/kubernetes-dashboard" id=8c43c238-21b3-4485-9f98-36bde84119ca name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 10:44:03 functional-548498 crio[5528]: time="2025-09-08 10:44:03.903744688Z" level=info msg="Starting container: ce184ddae03e54b03ef64260f932a1ea36c4d2bc83dd1cb6d1e985ce894c7497" id=accff7c5-0170-40b8-8d34-d909f665bcec name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 10:44:03 functional-548498 crio[5528]: time="2025-09-08 10:44:03.909922228Z" level=info msg="Started container" PID=9653 containerID=ce184ddae03e54b03ef64260f932a1ea36c4d2bc83dd1cb6d1e985ce894c7497 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tfd52/kubernetes-dashboard id=accff7c5-0170-40b8-8d34-d909f665bcec name=/runtime.v1.RuntimeService/StartContainer sandboxID=d97322b3beb25bc355f23a6f484746a8be02f585e6471841f24309d8e28101dd
	Sep 08 10:44:03 functional-548498 crio[5528]: time="2025-09-08 10:44:03.982637817Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.384407360Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=96cf60f2-02e8-42df-975d-4367d100942e name=/runtime.v1.ImageService/PullImage
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.385036081Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=26ce879a-1bdc-4036-a0d1-120296bb3fe4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.385776540Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c],Size_:43824855,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=26ce879a-1bdc-4036-a0d1-120296bb3fe4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.386502818Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=55dd672c-edad-489e-a0e6-9b8d5db0bfed name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.387186907Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c],Size_:43824855,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=55dd672c-edad-489e-a0e6-9b8d5db0bfed name=/runtime.v1.ImageService/ImageStatus
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.390192611Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-t8p9j/dashboard-metrics-scraper" id=51d0a4f5-4693-44c0-939f-c84916e988e3 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.390283070Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.403247805Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7db3cdd90ddc285b88bc3305ecaf630fb04910fc30957be1857cf496100b35f7/merged/etc/group: no such file or directory"
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.444653456Z" level=info msg="Created container 5acdccbafba9d81613a163f257190f35b5764177a1daa9c76d6a36084dc96f55: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-t8p9j/dashboard-metrics-scraper" id=51d0a4f5-4693-44c0-939f-c84916e988e3 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.445354816Z" level=info msg="Starting container: 5acdccbafba9d81613a163f257190f35b5764177a1daa9c76d6a36084dc96f55" id=45364d15-21e8-4dd4-9663-9cc840839e65 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 10:44:05 functional-548498 crio[5528]: time="2025-09-08 10:44:05.452784651Z" level=info msg="Started container" PID=9882 containerID=5acdccbafba9d81613a163f257190f35b5764177a1daa9c76d6a36084dc96f55 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-t8p9j/dashboard-metrics-scraper id=45364d15-21e8-4dd4-9663-9cc840839e65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=192ec4e7742768e6f447e549a7a372f65a3643546dbff9cb6a2fd7a3654f6b7c
	Sep 08 10:44:24 functional-548498 crio[5528]: time="2025-09-08 10:44:24.904127575Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=18823448-2f61-464e-9830-9db61f06e0a1 name=/runtime.v1.ImageService/PullImage
	Sep 08 10:44:25 functional-548498 crio[5528]: time="2025-09-08 10:44:25.904307689Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=71ce8b9e-beed-43fd-8373-da34d81de86f name=/runtime.v1.ImageService/PullImage
	Sep 08 10:45:14 functional-548498 crio[5528]: time="2025-09-08 10:45:14.904469238Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8cd45e77-be94-4445-9d88-c1611e0c40eb name=/runtime.v1.ImageService/PullImage
	Sep 08 10:45:16 functional-548498 crio[5528]: time="2025-09-08 10:45:16.904259461Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a9afb48e-b357-427d-bab4-c4c53b7eed81 name=/runtime.v1.ImageService/PullImage
	Sep 08 10:46:45 functional-548498 crio[5528]: time="2025-09-08 10:46:45.904751933Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=847fe51e-1653-45cd-a532-104b5bf2ad88 name=/runtime.v1.ImageService/PullImage
	Sep 08 10:46:48 functional-548498 crio[5528]: time="2025-09-08 10:46:48.904530537Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1bf8829e-384f-413a-a2e2-1a9e6b8e6e30 name=/runtime.v1.ImageService/PullImage
	Sep 08 10:49:35 functional-548498 crio[5528]: time="2025-09-08 10:49:35.904233026Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d6423cdf-3bd5-4b21-a508-732335b47c3d name=/runtime.v1.ImageService/PullImage
	Sep 08 10:49:36 functional-548498 crio[5528]: time="2025-09-08 10:49:36.904509946Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fd1f2e0a-f0cb-4ac6-8aec-8938768cddc6 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5acdccbafba9d       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   192ec4e774276       dashboard-metrics-scraper-77bf4d6c4c-t8p9j
	ce184ddae03e5       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   d97322b3beb25       kubernetes-dashboard-855c9754f9-tfd52
	9b7723a72e2d5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   259297c91508e       busybox-mount
	7d6d819ae5bf7       docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57                  9 minutes ago       Running             myfrontend                  0                   b9095f91e161b       sp-pod
	8fc38cfe03c13       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   c47ad4c78f728       nginx-svc
	2df63895db143       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  10 minutes ago      Running             mysql                       0                   39a9b2ded3a72       mysql-5bb876957f-xb8wk
	dc3318d8d8f8c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     3                   1a88f18b0119d       coredns-66bc5c9577-zbq66
	3b5ec737736fe       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 3                   c21e20bb44a83       kindnet-sfp4m
	46a87998e33d2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  3                   a2f8382eba213       kube-proxy-4r5qk
	0e7c94a01ef04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   e354c401f3368       storage-provisioner
	a29d09e7ada54       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   19a07744f0a0a       kube-apiserver-functional-548498
	81dc5e7673193       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              3                   f9474e98d8149       kube-scheduler-functional-548498
	6e211a68d6775       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     3                   bf478c3ae9587       kube-controller-manager-functional-548498
	d1d4536420c79       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        3                   72e6f9a3ffc04       etcd-functional-548498
	779a302a5c579       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     2                   1a88f18b0119d       coredns-66bc5c9577-zbq66
	940c868ab86e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   e354c401f3368       storage-provisioner
	362d1cccf89ca       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     2                   bf478c3ae9587       kube-controller-manager-functional-548498
	66a4045017723       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        2                   72e6f9a3ffc04       etcd-functional-548498
	47881d04d0f03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 2                   c21e20bb44a83       kindnet-sfp4m
	44a94b3c47584       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  2                   a2f8382eba213       kube-proxy-4r5qk
	e0702aa8a9311       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              2                   f9474e98d8149       kube-scheduler-functional-548498
	
	
	==> coredns [779a302a5c579a03509efbe17fb1ef65ced75564c32143db30b72a62ea749f61] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51518 - 462 "HINFO IN 9041528763454103751.4444445137301313146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.091807552s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dc3318d8d8f8cc77d631126ef844cc9eea1c1d1bf1b04ff9248f86b7c9be0ff5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34203 - 13777 "HINFO IN 1492955300587475073.5960849898354094801. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024188439s
	
	
	==> describe nodes <==
	Name:               functional-548498
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-548498
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b5c9e357ec605e3f7a3fbfd5f3e59fa37db6ba2
	                    minikube.k8s.io/name=functional-548498
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T10_41_07_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 10:41:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-548498
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 10:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 10:53:15 +0000   Mon, 08 Sep 2025 10:41:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 10:53:15 +0000   Mon, 08 Sep 2025 10:41:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 10:53:15 +0000   Mon, 08 Sep 2025 10:41:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 10:53:15 +0000   Mon, 08 Sep 2025 10:41:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-548498
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8a376ae62934799910740a2f825e1f5
	  System UUID:                c649988b-1703-4502-b38a-aeefecbc0820
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7rvb5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  default                     hello-node-connect-7d85dfc575-kttfw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xb8wk                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-zbq66                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-548498                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-sfp4m                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-548498              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-548498     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4r5qk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-548498              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-t8p9j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tfd52         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-548498 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-548498 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-548498 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-548498 event: Registered Node functional-548498 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-548498 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-548498 event: Registered Node functional-548498 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-548498 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-548498 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-548498 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-548498 event: Registered Node functional-548498 in Controller
	
	
	==> dmesg <==
	[  +0.632194] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023978] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.805671] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 8 10:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +1.014386] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +2.015870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +4.095613] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[  +8.187276] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[ +16.130610] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[Sep 8 10:38] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 3f a7 fc e7 96 a6 6c 2a b9 6c 76 08 00
	[Sep 8 10:43] FS-Cache: Duplicate cookie detected
	[  +0.004723] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006767] FS-Cache: O-cookie d=000000009b95eadb{9P.session} n=0000000098502e1c
	[  +0.007583] FS-Cache: O-key=[10] '34323936313837393734'
	[  +0.005512] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007971] FS-Cache: N-cookie d=000000009b95eadb{9P.session} n=000000007a33e68b
	[  +0.008929] FS-Cache: N-key=[10] '34323936313837393734'
	[Sep 8 10:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [66a4045017723b84028133f74bbb36273693deaa923d91a8c37379cd67f12447] <==
	{"level":"warn","ts":"2025-09-08T10:42:19.307370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:19.320765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:19.378995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:19.404027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:19.408398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:19.416609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:42:19.423513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54832","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T10:42:48.796473Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T10:42:48.796579Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-548498","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T10:42:48.796688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T10:42:48.929833Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T10:42:48.931409Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:42:48.931466Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-08T10:42:48.931524Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T10:42:48.931533Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T10:42:48.931524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T10:42:48.931564Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T10:42:48.931582Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T10:42:48.931526Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T10:42:48.931610Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T10:42:48.931622Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:42:48.934478Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T10:42:48.934564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T10:42:48.934593Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T10:42:48.934598Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-548498","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d1d4536420c794f7f8c5e7157a010f4728ba77f5c38bd84bd39d3b074d9131da] <==
	{"level":"warn","ts":"2025-09-08T10:43:00.418240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.478967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.486898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.494461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.502781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.509333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.527962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.580032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.589908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.597180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.604153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.611424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.618938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.681569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.688602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.702875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.709290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.716910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.780264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.786786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.795587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T10:43:00.889219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T10:52:59.736647Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1220}
	{"level":"info","ts":"2025-09-08T10:52:59.759356Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1220,"took":"22.317381ms","hash":2237239829,"current-db-size-bytes":3653632,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1720320,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-08T10:52:59.759439Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2237239829,"revision":1220,"compact-revision":-1}
	
	
	==> kernel <==
	 10:53:42 up  1:36,  0 users,  load average: 0.24, 1.97, 32.53
	Linux functional-548498 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [3b5ec737736fee177ff8ebccc144fb952d45945e8213d4493e2d1f2201b081d5] <==
	I0908 10:51:33.079807       1 main.go:301] handling current node
	I0908 10:51:43.087804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:51:43.087850       1 main.go:301] handling current node
	I0908 10:51:53.087728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:51:53.087778       1 main.go:301] handling current node
	I0908 10:52:03.078754       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:52:03.078793       1 main.go:301] handling current node
	I0908 10:52:13.079764       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:52:13.079828       1 main.go:301] handling current node
	I0908 10:52:23.087737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:52:23.087773       1 main.go:301] handling current node
	I0908 10:52:33.081255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:52:33.081311       1 main.go:301] handling current node
	I0908 10:52:43.085129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:52:43.085176       1 main.go:301] handling current node
	I0908 10:52:53.087774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:52:53.087809       1 main.go:301] handling current node
	I0908 10:53:03.080100       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:53:03.080141       1 main.go:301] handling current node
	I0908 10:53:13.079802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:53:13.079838       1 main.go:301] handling current node
	I0908 10:53:23.083776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:53:23.083838       1 main.go:301] handling current node
	I0908 10:53:33.078801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:53:33.078857       1 main.go:301] handling current node
	
	
	==> kindnet [47881d04d0f0377dbba872150bf09ab200c58edc0c2e6b9e0145f881349e37bf] <==
	I0908 10:42:16.983448       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 10:42:16.983859       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 10:42:16.984077       1 main.go:148] setting mtu 1500 for CNI 
	I0908 10:42:16.984129       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 10:42:16.984183       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T10:42:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E0908 10:42:17.293243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0908 10:42:17.293709       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 10:42:17.293734       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 10:42:17.293750       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 10:42:17.294154       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 10:42:17.476225       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0908 10:42:17.476929       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0908 10:42:20.595767       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 10:42:20.677538       1 metrics.go:72] Registering metrics
	I0908 10:42:20.677759       1 controller.go:711] "Syncing nftables rules"
	I0908 10:42:27.292321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:42:27.292412       1 main.go:301] handling current node
	I0908 10:42:37.293741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:42:37.293801       1 main.go:301] handling current node
	I0908 10:42:47.299763       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 10:42:47.299813       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a29d09e7ada54c1c02496b5e7d4f962c9b6446157c456e5fb02b1edbe89c9ac1] <==
	E0908 10:43:45.239851       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53140: use of closed network connection
	E0908 10:43:48.176057       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58886: use of closed network connection
	E0908 10:43:49.609732       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58938: use of closed network connection
	E0908 10:43:56.614899       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58988: use of closed network connection
	I0908 10:43:59.130098       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 10:43:59.589386       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.79.39"}
	I0908 10:43:59.602402       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.155.103"}
	I0908 10:44:15.997615       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:44:26.975432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:45:22.162192       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:45:47.027529       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:46:36.465939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:47:02.185332       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:47:53.273225       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:48:10.683093       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:49:00.546439       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:49:15.290103       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:50:01.937768       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:50:20.957476       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:51:02.388675       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:51:34.529589       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:52:07.237190       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:52:35.066643       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 10:53:01.898778       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 10:53:25.341470       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [362d1cccf89ca6621f5272d2a4e5470f4a65dabdf71cb22104e35aca2b2d5968] <==
	I0908 10:42:23.822158       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 10:42:23.824727       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 10:42:23.833426       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 10:42:23.836719       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 10:42:23.838997       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 10:42:23.841388       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 10:42:23.859433       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 10:42:23.859467       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 10:42:23.859504       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 10:42:23.859569       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 10:42:23.859726       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 10:42:23.859771       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-548498"
	I0908 10:42:23.859830       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 10:42:23.859861       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 10:42:23.859877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:42:23.859888       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 10:42:23.859895       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 10:42:23.860126       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 10:42:23.860127       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 10:42:23.860232       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 10:42:23.860750       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 10:42:23.861006       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:42:23.864622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0908 10:42:23.864807       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 10:42:23.922580       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [6e211a68d677522b64377ceab69b58452feacd496972e896bb5d7ea8e8bb62a6] <==
	I0908 10:43:04.877432       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 10:43:04.877553       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 10:43:04.877613       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 10:43:04.877693       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 10:43:04.877123       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 10:43:04.877913       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 10:43:04.877968       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 10:43:04.877990       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 10:43:04.878456       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 10:43:04.878491       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 10:43:04.878518       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 10:43:04.878824       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-548498"
	I0908 10:43:04.879027       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 10:43:04.880414       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 10:43:04.891729       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 10:43:04.891770       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 10:43:04.891781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 10:43:04.924486       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 10:43:59.292285       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.299626       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.311858       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.379365       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.382073       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.383586       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 10:43:59.387723       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [44a94b3c475848c6fff454abbf0cfb6c57da30c91d901ccc24507e31d676b64f] <==
	I0908 10:42:17.005026       1 server_linux.go:53] "Using iptables proxy"
	I0908 10:42:17.409434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:42:20.675858       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:42:20.676030       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 10:42:20.676213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:42:20.801919       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 10:42:20.802107       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:42:20.807751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:42:20.808215       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:42:20.808246       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:42:20.809806       1 config.go:200] "Starting service config controller"
	I0908 10:42:20.809841       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:42:20.809990       1 config.go:309] "Starting node config controller"
	I0908 10:42:20.810006       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:42:20.810014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:42:20.810358       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:42:20.810475       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:42:20.810424       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:42:20.810501       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:42:20.910239       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 10:42:20.911365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 10:42:20.911409       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [46a87998e33d2f11818d71b16189edc088b1b312deeb4603af22d506c3b55b87] <==
	I0908 10:43:02.695866       1 server_linux.go:53] "Using iptables proxy"
	I0908 10:43:02.837461       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 10:43:02.938171       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 10:43:02.938223       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 10:43:02.938336       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 10:43:03.001252       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 10:43:03.001320       1 server_linux.go:132] "Using iptables Proxier"
	I0908 10:43:03.006362       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 10:43:03.006832       1 server.go:527] "Version info" version="v1.34.0"
	I0908 10:43:03.006856       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:43:03.008099       1 config.go:200] "Starting service config controller"
	I0908 10:43:03.008209       1 config.go:309] "Starting node config controller"
	I0908 10:43:03.008280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 10:43:03.008312       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 10:43:03.008119       1 config.go:106] "Starting endpoint slice config controller"
	I0908 10:43:03.008365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 10:43:03.008134       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 10:43:03.008411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 10:43:03.008208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 10:43:03.108992       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 10:43:03.109024       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 10:43:03.109089       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [81dc5e76731931d049420db32b74418fde758c66f1da54ac34c5c07ed2bb51c6] <==
	I0908 10:43:00.006999       1 serving.go:386] Generated self-signed cert in-memory
	I0908 10:43:02.976854       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 10:43:02.976956       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:43:02.982389       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 10:43:02.982428       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 10:43:02.982443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:02.982463       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:43:02.982469       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:02.982478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:02.982961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 10:43:02.983055       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 10:43:03.082847       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 10:43:03.082957       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 10:43:03.083614       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e0702aa8a931154b19ab68040a86b6c489a825ed6d6a416bd0fcbde279396b1b] <==
	I0908 10:42:18.102518       1 serving.go:386] Generated self-signed cert in-memory
	W0908 10:42:20.204447       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 10:42:20.204485       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 10:42:20.204497       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 10:42:20.204505       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 10:42:20.595394       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 10:42:20.595696       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 10:42:20.598667       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 10:42:20.598770       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:20.598782       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:20.598795       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 10:42:20.699884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 10:42:48.797916       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 10:42:48.797978       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 10:42:48.798087       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 10:42:48.798149       1 server.go:265] "[graceful-termination] secure server is exiting"
	I0908 10:42:48.798272       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0908 10:42:48.798368       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.037481    5893 manager.go:1116] Failed to create existing container: /crio-72e6f9a3ffc04addb62ba489f91dc4de7e578b3c504fb1c2bc0a8eeaf28da876: Error finding container 72e6f9a3ffc04addb62ba489f91dc4de7e578b3c504fb1c2bc0a8eeaf28da876: Status 404 returned error can't find the container with id 72e6f9a3ffc04addb62ba489f91dc4de7e578b3c504fb1c2bc0a8eeaf28da876
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.037660    5893 manager.go:1116] Failed to create existing container: /docker/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/crio-e354c401f336897c7826a4c6d3a40012369be26f5eff608f45eac6185d876204: Error finding container e354c401f336897c7826a4c6d3a40012369be26f5eff608f45eac6185d876204: Status 404 returned error can't find the container with id e354c401f336897c7826a4c6d3a40012369be26f5eff608f45eac6185d876204
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.037876    5893 manager.go:1116] Failed to create existing container: /crio-124c0e9f8bee30cd0d0ddfacf85752c749b51e6d5d092e17652d5c190293a7d3: Error finding container 124c0e9f8bee30cd0d0ddfacf85752c749b51e6d5d092e17652d5c190293a7d3: Status 404 returned error can't find the container with id 124c0e9f8bee30cd0d0ddfacf85752c749b51e6d5d092e17652d5c190293a7d3
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.038060    5893 manager.go:1116] Failed to create existing container: /docker/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/crio-28c56ba2f71061e9d41d495e728161bc46fe25c7bfa9bac144a9c801d37a71fa: Error finding container 28c56ba2f71061e9d41d495e728161bc46fe25c7bfa9bac144a9c801d37a71fa: Status 404 returned error can't find the container with id 28c56ba2f71061e9d41d495e728161bc46fe25c7bfa9bac144a9c801d37a71fa
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.038234    5893 manager.go:1116] Failed to create existing container: /crio-bf478c3ae9587e0395027d8723f299ff5d6c566329d5b1cd298f8e4cc48f175c: Error finding container bf478c3ae9587e0395027d8723f299ff5d6c566329d5b1cd298f8e4cc48f175c: Status 404 returned error can't find the container with id bf478c3ae9587e0395027d8723f299ff5d6c566329d5b1cd298f8e4cc48f175c
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.038400    5893 manager.go:1116] Failed to create existing container: /crio-0eb8cecfd2ca88fcdc9940e7488cf463a9b3022c0d884d0460009908009b779c: Error finding container 0eb8cecfd2ca88fcdc9940e7488cf463a9b3022c0d884d0460009908009b779c: Status 404 returned error can't find the container with id 0eb8cecfd2ca88fcdc9940e7488cf463a9b3022c0d884d0460009908009b779c
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.038574    5893 manager.go:1116] Failed to create existing container: /crio-e354c401f336897c7826a4c6d3a40012369be26f5eff608f45eac6185d876204: Error finding container e354c401f336897c7826a4c6d3a40012369be26f5eff608f45eac6185d876204: Status 404 returned error can't find the container with id e354c401f336897c7826a4c6d3a40012369be26f5eff608f45eac6185d876204
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.038761    5893 manager.go:1116] Failed to create existing container: /docker/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/crio-bf478c3ae9587e0395027d8723f299ff5d6c566329d5b1cd298f8e4cc48f175c: Error finding container bf478c3ae9587e0395027d8723f299ff5d6c566329d5b1cd298f8e4cc48f175c: Status 404 returned error can't find the container with id bf478c3ae9587e0395027d8723f299ff5d6c566329d5b1cd298f8e4cc48f175c
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.038944    5893 manager.go:1116] Failed to create existing container: /docker/7da1d7fb2bd29696e491dc431e3cf8ae865b27dcaebe7cb692a1fd66d1e71e74/crio-1a88f18b0119d0862bf097a7a2b6db2d60e8882262816b4b8b8ee0c49f99be8a: Error finding container 1a88f18b0119d0862bf097a7a2b6db2d60e8882262816b4b8b8ee0c49f99be8a: Status 404 returned error can't find the container with id 1a88f18b0119d0862bf097a7a2b6db2d60e8882262816b4b8b8ee0c49f99be8a
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.174946    5893 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328778174707492  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:52:58 functional-548498 kubelet[5893]: E0908 10:52:58.174986    5893 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328778174707492  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:52:59 functional-548498 kubelet[5893]: E0908 10:52:59.904238    5893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7rvb5" podUID="8b40f526-142f-44b2-95f9-359ea0f7f4da"
	Sep 08 10:53:05 functional-548498 kubelet[5893]: E0908 10:53:05.904488    5893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-kttfw" podUID="c452c08d-1f0e-48c8-870e-f45054aece58"
	Sep 08 10:53:08 functional-548498 kubelet[5893]: E0908 10:53:08.176712    5893 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328788176393347  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:08 functional-548498 kubelet[5893]: E0908 10:53:08.176758    5893 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328788176393347  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:14 functional-548498 kubelet[5893]: E0908 10:53:14.903494    5893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7rvb5" podUID="8b40f526-142f-44b2-95f9-359ea0f7f4da"
	Sep 08 10:53:18 functional-548498 kubelet[5893]: E0908 10:53:18.178302    5893 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328798178098413  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:18 functional-548498 kubelet[5893]: E0908 10:53:18.178341    5893 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328798178098413  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:18 functional-548498 kubelet[5893]: E0908 10:53:18.903493    5893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-kttfw" podUID="c452c08d-1f0e-48c8-870e-f45054aece58"
	Sep 08 10:53:28 functional-548498 kubelet[5893]: E0908 10:53:28.179971    5893 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328808179713561  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:28 functional-548498 kubelet[5893]: E0908 10:53:28.180014    5893 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328808179713561  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:29 functional-548498 kubelet[5893]: E0908 10:53:29.904600    5893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7rvb5" podUID="8b40f526-142f-44b2-95f9-359ea0f7f4da"
	Sep 08 10:53:33 functional-548498 kubelet[5893]: E0908 10:53:33.906112    5893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-kttfw" podUID="c452c08d-1f0e-48c8-870e-f45054aece58"
	Sep 08 10:53:38 functional-548498 kubelet[5893]: E0908 10:53:38.181439    5893 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757328818181218576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 10:53:38 functional-548498 kubelet[5893]: E0908 10:53:38.181478    5893 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757328818181218576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	
	
	==> kubernetes-dashboard [ce184ddae03e54b03ef64260f932a1ea36c4d2bc83dd1cb6d1e985ce894c7497] <==
	2025/09/08 10:44:03 Starting overwatch
	2025/09/08 10:44:03 Using namespace: kubernetes-dashboard
	2025/09/08 10:44:03 Using in-cluster config to connect to apiserver
	2025/09/08 10:44:03 Using secret token for csrf signing
	2025/09/08 10:44:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/08 10:44:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/08 10:44:03 Successful initial request to the apiserver, version: v1.34.0
	2025/09/08 10:44:03 Generating JWE encryption key
	2025/09/08 10:44:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/08 10:44:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/08 10:44:04 Initializing JWE encryption key from synchronized object
	2025/09/08 10:44:04 Creating in-cluster Sidecar client
	2025/09/08 10:44:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/08 10:44:04 Serving insecurely on HTTP port: 9090
	2025/09/08 10:44:34 Successful request to sidecar
	
	
	==> storage-provisioner [0e7c94a01ef044931ca39c5f05f8c03011d120f031fd13dca9a1f9017b34d8d7] <==
	W0908 10:53:16.799770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:18.803486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:18.809819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:20.813456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:20.818499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:22.821892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:22.827383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:24.830494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:24.834607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:26.838310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:26.843706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:28.846848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:28.850749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:30.854595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:30.859263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:32.862583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:32.869221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:34.872509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:34.877140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:36.880541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:36.886196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:38.889156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:38.893363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:40.897006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:53:40.901780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [940c868ab86e8da7933405feac794732a3c95ca3da128726a4a751d6cddfa8c6] <==
	I0908 10:42:20.589012       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 10:42:20.593454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 10:42:20.683968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:24.140616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:28.401072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:31.999320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:35.053438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:38.076343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:38.080844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 10:42:38.081017       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 10:42:38.081076       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99b29cd4-59e1-4d7a-9b71-bce8ca0d96aa", APIVersion:"v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-548498_9ad241cc-b3f3-4496-a41e-20e1843123e6 became leader
	I0908 10:42:38.081210       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-548498_9ad241cc-b3f3-4496-a41e-20e1843123e6!
	W0908 10:42:38.083890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:38.087580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 10:42:38.182079       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-548498_9ad241cc-b3f3-4496-a41e-20e1843123e6!
	W0908 10:42:40.091438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:40.096343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:42.100680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:42.105365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:44.109044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:44.112994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:46.116136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:46.121641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:48.125190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 10:42:48.129679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-548498 -n functional-548498
helpers_test.go:269: (dbg) Run:  kubectl --context functional-548498 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7rvb5 hello-node-connect-7d85dfc575-kttfw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-548498 describe pod busybox-mount hello-node-75c85bcc94-7rvb5 hello-node-connect-7d85dfc575-kttfw
helpers_test.go:290: (dbg) kubectl --context functional-548498 describe pod busybox-mount hello-node-75c85bcc94-7rvb5 hello-node-connect-7d85dfc575-kttfw:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-548498/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 10:43:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://9b7723a72e2d54d6481c02ccab4b5e9c047758d4cf1d2373107567eea0fa79c6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 10:43:53 +0000
	      Finished:     Mon, 08 Sep 2025 10:43:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bpmq7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bpmq7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m52s  default-scheduler  Successfully assigned default/busybox-mount to functional-548498
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.214s (2.214s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Created container: mount-munger
	  Normal  Started    9m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7rvb5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-548498/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 10:43:44 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dr6mv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dr6mv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7rvb5 to functional-548498
	  Normal   Pulling    6m58s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m58s (x5 over 9m58s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m46s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m46s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-kttfw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-548498/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 10:43:39 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rt7rh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rt7rh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-kttfw to functional-548498
	  Normal   Pulling    6m55s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m55s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m55s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.25s)
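The kubelet and describe output above pin this failure on image resolution rather than the service machinery: CRI-O refuses the short name "kicbase/echo-server" because /etc/containers/registries.conf in the node defines no unqualified-search registries, so the hello-node-connect pod never leaves ImagePullBackOff and every service check built on it times out. A minimal remediation sketch, assuming the node runs CRI-O with the standard containers-registries.conf(5) format and that docker.io is the registry the test intends (neither is verified against this run):

	# inside the node: minikube -p functional-548498 ssh
	# let short names resolve via docker.io, then restart CRI-O
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio

After the restart, the kubelet's existing back-off loop should retry the pull as docker.io/kicbase/echo-server, so the pod can come up without re-running the deployment.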

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-548498 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-548498 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7rvb5" [8b40f526-142f-44b2-95f9-359ea0f7f4da] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-548498 -n functional-548498
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 10:53:45.30737391 +0000 UTC m=+1188.039471938
functional_test.go:1460: (dbg) Run:  kubectl --context functional-548498 describe po hello-node-75c85bcc94-7rvb5 -n default
functional_test.go:1460: (dbg) kubectl --context functional-548498 describe po hello-node-75c85bcc94-7rvb5 -n default:
Name:             hello-node-75c85bcc94-7rvb5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-548498/192.168.49.2
Start Time:       Mon, 08 Sep 2025 10:43:44 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dr6mv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dr6mv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7rvb5 to functional-548498
  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m48s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-548498 logs hello-node-75c85bcc94-7rvb5 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-548498 logs hello-node-75c85bcc94-7rvb5 -n default: exit status 1 (66.137519ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7rvb5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-548498 logs hello-node-75c85bcc94-7rvb5 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)
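Same root cause as ServiceCmdConnect: the pod is healthy apart from the pull, and the 10m0s wait simply expires while the kubelet backs off. As an alternative to editing registries.conf, a fresh run could sidestep short-name resolution entirely by fully qualifying the image; this sketch assumes the Docker Hub copy of the image is the one intended (the test itself deliberately uses the short name):

	# a fully qualified reference never consults unqualified-search registries
	kubectl --context functional-548498 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest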

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 service --namespace=default --https --url hello-node: exit status 115 (531.262287ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31980
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-548498 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
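This failure, like the Format and URL failures below, is a knock-on effect of the ImagePullBackOff above: minikube resolves the NodePort URL (https://192.168.49.2:31980) but exits with SVC_UNREACHABLE because no running pod backs the service. That diagnosis can be confirmed before retrying with two standard queries; the endpointslice selector relies on the kubernetes.io/service-name label that Kubernetes sets automatically:

	# no Running pod and no ready endpoints => SVC_UNREACHABLE is expected
	kubectl --context functional-548498 get pods -l app=hello-node
	kubectl --context functional-548498 get endpointslices -l kubernetes.io/service-name=hello-node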

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 service hello-node --url --format={{.IP}}: exit status 115 (521.7106ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-548498 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 service hello-node --url: exit status 115 (528.335581ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31980
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-548498 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31980
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    

Test pass (299/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.73
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.23
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.17
21 TestBinaryMirror 0.82
22 TestOffline 90.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 169.27
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 9.55
35 TestAddons/parallel/Registry 22.18
36 TestAddons/parallel/RegistryCreds 0.65
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.7
41 TestAddons/parallel/CSI 52.17
42 TestAddons/parallel/Headlamp 26.63
43 TestAddons/parallel/CloudSpanner 5.84
44 TestAddons/parallel/LocalPath 52.69
45 TestAddons/parallel/NvidiaDevicePlugin 5.47
46 TestAddons/parallel/Yakd 10.74
47 TestAddons/parallel/AmdGpuDevicePlugin 6.48
48 TestAddons/StoppedEnableDisable 12.18
49 TestCertOptions 31.81
50 TestCertExpiration 238.13
52 TestForceSystemdFlag 32.51
53 TestForceSystemdEnv 29.51
55 TestKVMDriverInstallOrUpdate 1.59
59 TestErrorSpam/setup 24.58
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 1.59
63 TestErrorSpam/unpause 1.79
64 TestErrorSpam/stop 1.39
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 70.77
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 44.43
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.03
76 TestFunctional/serial/CacheCmd/cache/add_local 0.98
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 30.03
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.44
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.12
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 7.87
92 TestFunctional/parallel/DryRun 0.39
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.98
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 28.62
102 TestFunctional/parallel/SSHCmd 1.09
103 TestFunctional/parallel/CpCmd 2.31
104 TestFunctional/parallel/MySQL 23.05
105 TestFunctional/parallel/FileSync 0.33
106 TestFunctional/parallel/CertSync 2.11
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
114 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.55
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.32
122 TestFunctional/parallel/ImageCommands/Setup 0.47
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.88
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.69
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.81
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.37
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.7
135 TestFunctional/parallel/ImageCommands/ImageRemove 2.23
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.84
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
146 TestFunctional/parallel/ProfileCmd/profile_list 0.39
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
148 TestFunctional/parallel/MountCmd/any-port 7.54
149 TestFunctional/parallel/MountCmd/specific-port 2
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.11
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 172.6
164 TestMultiControlPlane/serial/DeployApp 5.3
165 TestMultiControlPlane/serial/PingHostFromPods 1.13
166 TestMultiControlPlane/serial/AddWorkerNode 58.4
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
169 TestMultiControlPlane/serial/CopyFile 16.52
170 TestMultiControlPlane/serial/StopSecondaryNode 12.58
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
172 TestMultiControlPlane/serial/RestartSecondaryNode 20.62
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 145.55
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.47
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
177 TestMultiControlPlane/serial/StopCluster 35.74
178 TestMultiControlPlane/serial/RestartCluster 58.17
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
180 TestMultiControlPlane/serial/AddSecondaryNode 77.41
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
185 TestJSONOutput/start/Command 72.09
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.69
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.63
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.81
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 30.05
211 TestKicCustomNetwork/use_default_bridge_network 25.07
212 TestKicExistingNetwork 25.36
213 TestKicCustomSubnet 28.15
214 TestKicStaticIP 27.71
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 56.19
219 TestMountStart/serial/StartWithMountFirst 8.08
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 5.45
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.19
226 TestMountStart/serial/RestartStopped 7.23
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 127.37
231 TestMultiNode/serial/DeployApp2Nodes 4.94
232 TestMultiNode/serial/PingHostFrom2Pods 0.8
233 TestMultiNode/serial/AddNode 57
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.39
237 TestMultiNode/serial/StopNode 2.13
238 TestMultiNode/serial/StartAfterStop 7.37
239 TestMultiNode/serial/RestartKeepsNodes 74.73
240 TestMultiNode/serial/DeleteNode 5.37
241 TestMultiNode/serial/StopMultiNode 23.88
242 TestMultiNode/serial/RestartMultiNode 47.57
243 TestMultiNode/serial/ValidateNameConflict 24.47
248 TestPreload 116.79
250 TestScheduledStopUnix 103.18
253 TestInsufficientStorage 12.6
254 TestRunningBinaryUpgrade 46.57
256 TestKubernetesUpgrade 331.66
257 TestMissingContainerUpgrade 79.21
259 TestStoppedBinaryUpgrade/Setup 0.66
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 35.6
265 TestStoppedBinaryUpgrade/Upgrade 66.49
270 TestNetworkPlugins/group/false 9.71
274 TestNoKubernetes/serial/StartWithStopK8s 27.97
275 TestNoKubernetes/serial/Start 8.05
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
278 TestNoKubernetes/serial/ProfileList 6.63
279 TestNoKubernetes/serial/Stop 1.22
280 TestNoKubernetes/serial/StartNoArgs 9.34
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
290 TestPause/serial/Start 77
291 TestNetworkPlugins/group/auto/Start 71
292 TestPause/serial/SecondStartNoReconfiguration 51.06
293 TestNetworkPlugins/group/auto/KubeletFlags 0.29
294 TestNetworkPlugins/group/auto/NetCatPod 10.26
295 TestNetworkPlugins/group/auto/DNS 0.13
296 TestNetworkPlugins/group/auto/Localhost 0.11
297 TestNetworkPlugins/group/auto/HairPin 0.11
298 TestPause/serial/Pause 0.75
299 TestPause/serial/VerifyStatus 0.33
300 TestPause/serial/Unpause 0.67
301 TestPause/serial/PauseAgain 0.93
302 TestPause/serial/DeletePaused 2.78
303 TestPause/serial/VerifyDeletedResources 30.59
304 TestNetworkPlugins/group/kindnet/Start 44.52
305 TestNetworkPlugins/group/calico/Start 59.33
306 TestNetworkPlugins/group/custom-flannel/Start 52.17
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
309 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
310 TestNetworkPlugins/group/kindnet/DNS 0.16
311 TestNetworkPlugins/group/kindnet/Localhost 0.13
312 TestNetworkPlugins/group/kindnet/HairPin 0.14
313 TestNetworkPlugins/group/enable-default-cni/Start 65.97
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
317 TestNetworkPlugins/group/calico/KubeletFlags 0.29
318 TestNetworkPlugins/group/calico/NetCatPod 10.21
319 TestNetworkPlugins/group/custom-flannel/DNS 0.15
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
322 TestNetworkPlugins/group/calico/DNS 0.18
323 TestNetworkPlugins/group/calico/Localhost 0.15
324 TestNetworkPlugins/group/calico/HairPin 0.14
325 TestNetworkPlugins/group/flannel/Start 56.84
326 TestNetworkPlugins/group/bridge/Start 61.83
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
333 TestStartStop/group/old-k8s-version/serial/FirstStart 57.67
335 TestStartStop/group/no-preload/serial/FirstStart 69.75
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
338 TestNetworkPlugins/group/flannel/NetCatPod 9.18
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
340 TestNetworkPlugins/group/bridge/NetCatPod 9.23
341 TestNetworkPlugins/group/flannel/DNS 0.17
342 TestNetworkPlugins/group/flannel/Localhost 0.15
343 TestNetworkPlugins/group/flannel/HairPin 0.16
344 TestNetworkPlugins/group/bridge/DNS 0.16
345 TestNetworkPlugins/group/bridge/Localhost 0.13
346 TestNetworkPlugins/group/bridge/HairPin 0.15
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.85
350 TestStartStop/group/newest-cni/serial/FirstStart 33.38
351 TestStartStop/group/old-k8s-version/serial/DeployApp 8.35
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
353 TestStartStop/group/old-k8s-version/serial/Stop 12.19
354 TestStartStop/group/no-preload/serial/DeployApp 9.32
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/old-k8s-version/serial/SecondStart 49.65
357 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
358 TestStartStop/group/newest-cni/serial/DeployApp 0
359 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
360 TestStartStop/group/no-preload/serial/Stop 12.09
361 TestStartStop/group/newest-cni/serial/Stop 1.2
362 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
363 TestStartStop/group/newest-cni/serial/SecondStart 15.91
364 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
365 TestStartStop/group/no-preload/serial/SecondStart 54.37
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
369 TestStartStop/group/newest-cni/serial/Pause 3.08
371 TestStartStop/group/embed-certs/serial/FirstStart 78.91
372 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
378 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.42
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
380 TestStartStop/group/old-k8s-version/serial/Pause 2.94
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/no-preload/serial/Pause 3.11
385 TestStartStop/group/embed-certs/serial/DeployApp 9.24
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/embed-certs/serial/Stop 11.93
390 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
391 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.73
392 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
393 TestStartStop/group/embed-certs/serial/SecondStart 48.41
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
397 TestStartStop/group/embed-certs/serial/Pause 2.78

TestDownloadOnly/v1.28.0/json-events (5.57s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-661714 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-661714 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.571400472s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.57s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 10:34:02.884641  264164 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 10:34:02.884780  264164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-661714
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-661714: exit status 85 (69.014654ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-661714 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-661714 │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:33:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:33:57.359432  264176 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:33:57.359720  264176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:33:57.359732  264176 out.go:374] Setting ErrFile to fd 2...
	I0908 10:33:57.359739  264176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:33:57.359982  264176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	W0908 10:33:57.360140  264176 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21503-260352/.minikube/config/config.json: open /home/jenkins/minikube-integration/21503-260352/.minikube/config/config.json: no such file or directory
	I0908 10:33:57.360805  264176 out.go:368] Setting JSON to true
	I0908 10:33:57.362618  264176 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4581,"bootTime":1757323056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:33:57.362725  264176 start.go:140] virtualization: kvm guest
	I0908 10:33:57.365337  264176 out.go:99] [download-only-661714] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 10:33:57.365544  264176 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 10:33:57.365591  264176 notify.go:220] Checking for updates...
	I0908 10:33:57.367391  264176 out.go:171] MINIKUBE_LOCATION=21503
	I0908 10:33:57.369273  264176 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:33:57.370999  264176 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:33:57.372709  264176 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 10:33:57.374356  264176 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 10:33:57.377136  264176 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 10:33:57.377382  264176 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:33:57.399400  264176 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 10:33:57.399530  264176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:33:57.814184  264176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 10:33:57.801531624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:33:57.814340  264176 docker.go:318] overlay module found
	I0908 10:33:57.816279  264176 out.go:99] Using the docker driver based on user configuration
	I0908 10:33:57.816330  264176 start.go:304] selected driver: docker
	I0908 10:33:57.816343  264176 start.go:918] validating driver "docker" against <nil>
	I0908 10:33:57.816496  264176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:33:57.875669  264176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 10:33:57.866699398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:33:57.875832  264176 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:33:57.876421  264176 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 10:33:57.876595  264176 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 10:33:57.878705  264176 out.go:171] Using Docker driver with root privileges
	I0908 10:33:57.880016  264176 cni.go:84] Creating CNI manager for ""
	I0908 10:33:57.880088  264176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 10:33:57.880100  264176 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 10:33:57.880177  264176 start.go:348] cluster config:
	{Name:download-only-661714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-661714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:33:57.881575  264176 out.go:99] Starting "download-only-661714" primary control-plane node in "download-only-661714" cluster
	I0908 10:33:57.881603  264176 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 10:33:57.882780  264176 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 10:33:57.882809  264176 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 10:33:57.882867  264176 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 10:33:57.899473  264176 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 10:33:57.899719  264176 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 10:33:57.899845  264176 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 10:33:57.907346  264176 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:33:57.907370  264176 cache.go:58] Caching tarball of preloaded images
	I0908 10:33:57.907532  264176 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 10:33:57.909615  264176 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 10:33:57.909642  264176 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:33:57.934640  264176 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:34:01.346759  264176 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:34:01.346844  264176 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:34:02.291582  264176 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0908 10:34:02.292016  264176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/download-only-661714/config.json ...
	I0908 10:34:02.292055  264176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/download-only-661714/config.json: {Name:mk7f5f766addb5d08333b317e771ac14f66e93e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 10:34:02.292283  264176 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 10:34:02.292493  264176 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21503-260352/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-661714 host does not exist
	  To start a cluster, run: "minikube start -p download-only-661714"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-661714
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (5.73s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-633311 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-633311 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.729156597s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.73s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 10:34:09.048211  264164 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 10:34:09.048277  264164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-633311
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-633311: exit status 85 (69.469389ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-661714 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-661714 │ jenkins │ v1.36.0 │ 08 Sep 25 10:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ delete  │ -p download-only-661714                                                                                                                                                   │ download-only-661714 │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │ 08 Sep 25 10:34 UTC │
	│ start   │ -o=json --download-only -p download-only-633311 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-633311 │ jenkins │ v1.36.0 │ 08 Sep 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 10:34:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 10:34:03.362566  264517 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:34:03.362687  264517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:34:03.362699  264517 out.go:374] Setting ErrFile to fd 2...
	I0908 10:34:03.362706  264517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:34:03.362898  264517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 10:34:03.363502  264517 out.go:368] Setting JSON to true
	I0908 10:34:03.364414  264517 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4587,"bootTime":1757323056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:34:03.364531  264517 start.go:140] virtualization: kvm guest
	I0908 10:34:03.366689  264517 out.go:99] [download-only-633311] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:34:03.366878  264517 notify.go:220] Checking for updates...
	I0908 10:34:03.368089  264517 out.go:171] MINIKUBE_LOCATION=21503
	I0908 10:34:03.369487  264517 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:34:03.370980  264517 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:34:03.372170  264517 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 10:34:03.373653  264517 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 10:34:03.376143  264517 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 10:34:03.376396  264517 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:34:03.399449  264517 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 10:34:03.399573  264517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:34:03.453701  264517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 10:34:03.443730808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:34:03.453803  264517 docker.go:318] overlay module found
	I0908 10:34:03.455710  264517 out.go:99] Using the docker driver based on user configuration
	I0908 10:34:03.455751  264517 start.go:304] selected driver: docker
	I0908 10:34:03.455759  264517 start.go:918] validating driver "docker" against <nil>
	I0908 10:34:03.455856  264517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:34:03.512223  264517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 10:34:03.503378998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:34:03.512396  264517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 10:34:03.512896  264517 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 10:34:03.513083  264517 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 10:34:03.515082  264517 out.go:171] Using Docker driver with root privileges
	I0908 10:34:03.516382  264517 cni.go:84] Creating CNI manager for ""
	I0908 10:34:03.516454  264517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 10:34:03.516466  264517 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 10:34:03.516557  264517 start.go:348] cluster config:
	{Name:download-only-633311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-633311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:34:03.517888  264517 out.go:99] Starting "download-only-633311" primary control-plane node in "download-only-633311" cluster
	I0908 10:34:03.517923  264517 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 10:34:03.519136  264517 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 10:34:03.519164  264517 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:34:03.519271  264517 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 10:34:03.536647  264517 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 10:34:03.536782  264517 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 10:34:03.536801  264517 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 10:34:03.536805  264517 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 10:34:03.536815  264517 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 10:34:03.544310  264517 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:34:03.544345  264517 cache.go:58] Caching tarball of preloaded images
	I0908 10:34:03.544523  264517 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 10:34:03.546421  264517 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 10:34:03.546457  264517 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:34:03.573646  264517 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 10:34:07.756857  264517 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 10:34:07.756960  264517 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21503-260352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-633311 host does not exist
	  To start a cluster, run: "minikube start -p download-only-633311"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-633311
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.17s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-550628 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-550628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-550628
--- PASS: TestDownloadOnlyKic (1.17s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I0908 10:34:10.947977  264164 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-303365 --alsologtostderr --binary-mirror http://127.0.0.1:38671 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-303365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-303365
--- PASS: TestBinaryMirror (0.82s)

TestOffline (90.26s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-349159 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-349159 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m27.280705859s)
helpers_test.go:175: Cleaning up "offline-crio-349159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-349159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-349159: (2.98016401s)
--- PASS: TestOffline (90.26s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-310880
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-310880: exit status 85 (58.108301ms)

-- stdout --
	* Profile "addons-310880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-310880"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-310880
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-310880: exit status 85 (56.458479ms)

-- stdout --
	* Profile "addons-310880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-310880"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (169.27s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-310880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-310880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.268414404s)
--- PASS: TestAddons/Setup (169.27s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-310880 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-310880 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-310880 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-310880 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f5691420-51f4-4205-96e3-896c8bf2ee21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f5691420-51f4-4205-96e3-896c8bf2ee21] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004215362s
addons_test.go:694: (dbg) Run:  kubectl --context addons-310880 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-310880 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-310880 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

TestAddons/parallel/Registry (22.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.003903ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-v2kw8" [441d3a0d-f394-4350-a2e8-97c6310b39a6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003709256s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-kcvdc" [cb54aa87-60bf-455c-89fb-e4717dde0d00] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003929715s
addons_test.go:392: (dbg) Run:  kubectl --context addons-310880 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-310880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-310880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.40022201s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 ip
2025/09/08 10:37:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.18s)

TestAddons/parallel/RegistryCreds (0.65s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.40018ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-310880
addons_test.go:332: (dbg) Run:  kubectl --context addons-310880 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-dzdkn" [d4b110d4-0134-40c0-b0b1-b4ad051ba3a8] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003622184s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.891527ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ncmm5" [c170effb-94ae-4cc5-a6af-1f91971345c3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003776561s
addons_test.go:463: (dbg) Run:  kubectl --context addons-310880 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/CSI (52.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 10:37:25.235028  264164 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 10:37:25.238624  264164 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 10:37:25.238646  264164 kapi.go:107] duration metric: took 3.649787ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.659542ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [762a7489-d99c-46cb-ac37-62ce6ce44de2] Pending
helpers_test.go:352: "task-pv-pod" [762a7489-d99c-46cb-ac37-62ce6ce44de2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [762a7489-d99c-46cb-ac37-62ce6ce44de2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003798657s
addons_test.go:572: (dbg) Run:  kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-310880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-310880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-310880 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-310880 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c4a51e25-1468-46d4-b8a1-32922fe7c1f6] Pending
helpers_test.go:352: "task-pv-pod-restore" [c4a51e25-1468-46d4-b8a1-32922fe7c1f6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c4a51e25-1468-46d4-b8a1-32922fe7c1f6] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004203914s
addons_test.go:614: (dbg) Run:  kubectl --context addons-310880 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-310880 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-310880 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable volumesnapshots --alsologtostderr -v=1: (1.002027283s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.687906746s)
--- PASS: TestAddons/parallel/CSI (52.17s)
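
Note: the create → snapshot → restore cycle above condenses to roughly the following; a sketch using the same testdata manifests, with a jsonpath wait standing in for the test's polling loop:

	kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-310880 wait --timeout=6m \
	  --for=jsonpath='{.status.readyToUse}'=true volumesnapshot/new-snapshot-demo
	# drop the original consumers, then restore the snapshot into a new PVC and pod
	kubectl --context addons-310880 delete pod task-pv-pod
	kubectl --context addons-310880 delete pvc hpvc
	kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-310880 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml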

TestAddons/parallel/Headlamp (26.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-310880 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-8bghr" [1833758a-d04d-432e-a3f2-e40e03e1307e] Pending
helpers_test.go:352: "headlamp-6f46646d79-8bghr" [1833758a-d04d-432e-a3f2-e40e03e1307e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-8bghr" [1833758a-d04d-432e-a3f2-e40e03e1307e] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.004384729s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable headlamp --alsologtostderr -v=1: (5.721884573s)
--- PASS: TestAddons/parallel/Headlamp (26.63s)

TestAddons/parallel/CloudSpanner (5.84s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-cfpbj" [e5ba5e85-4153-4e41-82df-ce96fe098226] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004185939s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.84s)

TestAddons/parallel/LocalPath (52.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-310880 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-310880 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [01b88ac7-d47a-41b6-80e1-7874d538e1a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [01b88ac7-d47a-41b6-80e1-7874d538e1a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [01b88ac7-d47a-41b6-80e1-7874d538e1a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004514146s
addons_test.go:967: (dbg) Run:  kubectl --context addons-310880 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 ssh "cat /opt/local-path-provisioner/pvc-83775afe-2b66-4d94-a207-eeb453c0c82a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-310880 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-310880 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.804321415s)
--- PASS: TestAddons/parallel/LocalPath (52.69s)
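
Note: the ssh check above works because local-path provisions hostPath volumes under /opt/local-path-provisioner on the node; a sketch of the same verification with the PV name looked up rather than hard-coded:

	PV=$(kubectl --context addons-310880 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	minikube -p addons-310880 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"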

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-p887r" [3683dae7-40f7-454e-ab29-2bcead4c809b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003941574s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/parallel/Yakd (10.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-69mxf" [64d69fd9-024b-445e-8d28-5557326b61e6] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005041947s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-310880 addons disable yakd --alsologtostderr -v=1: (5.731908208s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

TestAddons/parallel/AmdGpuDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-8snfn" [f28519e1-a5b0-4c0d-88c6-881507390c2f] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003395519s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.48s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-310880
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-310880: (11.907666419s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-310880
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-310880
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-310880
--- PASS: TestAddons/StoppedEnableDisable (12.18s)
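
Note: the point of this test is that addon state can be toggled while the cluster is stopped; a minimal sketch:

	minikube stop -p addons-310880
	# both commands succeed against the stopped profile and take effect on the next start
	minikube addons enable dashboard -p addons-310880
	minikube addons disable dashboard -p addons-310880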

TestCertOptions (31.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-672036 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-672036 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.949570016s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-672036 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-672036 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-672036 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-672036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-672036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-672036: (2.126939816s)
--- PASS: TestCertOptions (31.81s)
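
Note: the openssl call above is what verifies the extra SANs and port; a sketch, assuming the profile was started with the flags shown:

	minikube -p cert-options-672036 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'	# expect 192.168.15.15 and www.google.com among the SANs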

TestCertExpiration (238.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-812784 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-812784 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.267783915s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-812784 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-812784 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (24.247420941s)
helpers_test.go:175: Cleaning up "cert-expiration-812784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-812784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-812784: (3.605633664s)
--- PASS: TestCertExpiration (238.13s)
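
Note: most of the 238s is spent waiting out the 3m certificate lifetime between the two starts; a sketch of the two-phase flow:

	minikube start -p cert-expiration-812784 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180	# let the short-lived certs expire
	# restarting with a longer lifetime regenerates the expired certificates
	minikube start -p cert-expiration-812784 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio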

TestForceSystemdFlag (32.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-732441 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-732441 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.761025978s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-732441 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-732441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-732441
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-732441: (2.440840612s)
--- PASS: TestForceSystemdFlag (32.51s)
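
Note: the ssh/cat above asserts on CRI-O's cgroup manager; a sketch of the check:

	minikube -p force-systemd-flag-732441 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
	  | grep cgroup_manager	# expect cgroup_manager = "systemd" when started with --force-systemd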

TestForceSystemdEnv (29.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-794844 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-794844 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.048923346s)
helpers_test.go:175: Cleaning up "force-systemd-env-794844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-794844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-794844: (2.463694135s)
--- PASS: TestForceSystemdEnv (29.51s)

TestKVMDriverInstallOrUpdate (1.59s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0908 11:20:09.364231  264164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:20:09.364393  264164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 11:20:09.407791  264164 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 11:20:09.407958  264164 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 11:20:09.408018  264164 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3592767386/001/docker-machine-driver-kvm2
I0908 11:20:09.646783  264164 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3592767386/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00011f560 gz:0xc00011f568 tar:0xc00011f500 tar.bz2:0xc00011f520 tar.gz:0xc00011f530 tar.xz:0xc00011f540 tar.zst:0xc00011f550 tbz2:0xc00011f520 tgz:0xc00011f530 txz:0xc00011f540 tzst:0xc00011f550 xz:0xc00011f570 zip:0xc00011f590 zst:0xc00011f578] Getters:map[file:0xc001e36bc0 http:0xc001754870 https:0xc0017548c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 11:20:09.646844  264164 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3592767386/001/docker-machine-driver-kvm2
I0908 11:20:10.275511  264164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:20:10.275772  264164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 11:20:10.316667  264164 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 11:20:10.316706  264164 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 11:20:10.316780  264164 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 11:20:10.316813  264164 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3592767386/002/docker-machine-driver-kvm2
I0908 11:20:10.344131  264164 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3592767386/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00011f560 gz:0xc00011f568 tar:0xc00011f500 tar.bz2:0xc00011f520 tar.gz:0xc00011f530 tar.xz:0xc00011f540 tar.zst:0xc00011f550 tbz2:0xc00011f520 tgz:0xc00011f530 txz:0xc00011f540 tzst:0xc00011f550 xz:0xc00011f570 zip:0xc00011f590 zst:0xc00011f578] Getters:map[file:0xc001fa9320 http:0xc001b114f0 https:0xc001b11540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 11:20:10.344189  264164 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3592767386/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.59s)
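
Note: the 404s above are the expected first leg of the download fallback (arch-suffixed release asset first, then the unsuffixed one); a shell sketch of the same fallback:

	BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
	curl -fLO "$BASE/docker-machine-driver-kvm2-amd64" \
	  || curl -fLO "$BASE/docker-machine-driver-kvm2"	# fall back when the arch-specific asset is missing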

TestErrorSpam/setup (24.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-598322 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-598322 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-598322 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-598322 --driver=docker  --container-runtime=crio: (24.58201506s)
--- PASS: TestErrorSpam/setup (24.58s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 stop: (1.191309452s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598322 --log_dir /tmp/nospam-598322 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21503-260352/.minikube/files/etc/test/nested/copy/264164/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548498 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-548498 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.773632941s)
--- PASS: TestFunctional/serial/StartWithProxy (70.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.43s)

=== RUN   TestFunctional/serial/SoftStart
I0908 10:41:56.278152  264164 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548498 --alsologtostderr -v=8
E0908 10:42:01.736217  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:01.742651  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:01.754125  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:01.775611  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:01.817156  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:01.898643  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:02.060285  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:02.381990  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:03.023995  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:04.305578  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:06.867799  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:11.989248  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:42:22.231644  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-548498 --alsologtostderr -v=8: (44.43124282s)
functional_test.go:678: soft start took 44.431987953s for "functional-548498" cluster.
I0908 10:42:40.710205  264164 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (44.43s)
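
Note: "soft start" here means re-running start against a profile that is already running; a minimal sketch:

	# reuses the existing container and cluster state instead of recreating them
	minikube start -p functional-548498 --alsologtostderr -v=8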

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-548498 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cache add registry.k8s.io/pause:3.3
E0908 10:42:42.713037  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 cache add registry.k8s.io/pause:3.3: (1.083819812s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 cache add registry.k8s.io/pause:latest: (1.004300621s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-548498 /tmp/TestFunctionalserialCacheCmdcacheadd_local527745793/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cache add minikube-local-cache-test:functional-548498
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cache delete minikube-local-cache-test:functional-548498
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-548498
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.449133ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)
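
Note: the round-trip above as plain commands; a sketch assuming the image is already in minikube's local cache:

	minikube -p functional-548498 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-548498 ssh sudo crictl inspecti registry.k8s.io/pause:latest	# fails: image gone
	minikube -p functional-548498 cache reload	# pushes cached images back onto the node
	minikube -p functional-548498 ssh sudo crictl inspecti registry.k8s.io/pause:latest	# succeeds again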

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 kubectl -- --context functional-548498 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-548498 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (30.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-548498 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.028783387s)
functional_test.go:776: restart took 30.028939077s for "functional-548498" cluster.
I0908 10:43:17.374269  264164 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (30.03s)
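
Note: --extra-config takes component.key=value pairs that are handed to the named control-plane component at (re)start; a sketch:

	minikube start -p functional-548498 --wait=all \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision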

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-548498 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 logs: (1.442949067s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 logs --file /tmp/TestFunctionalserialLogsFileCmd4246011740/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 logs --file /tmp/TestFunctionalserialLogsFileCmd4246011740/001/logs.txt: (1.462667258s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.12s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-548498 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-548498
E0908 10:43:23.674578  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-548498: exit status 115 (357.708841ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32719 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-548498 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
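
Note: exit status 115 is the SVC_UNREACHABLE error shown in the stderr above, returned when the service has no running pods behind it; a sketch of the repro:

	kubectl --context functional-548498 apply -f testdata/invalidsvc.yaml
	minikube -p functional-548498 service invalid-svc; echo "exit=$?"	# expect exit=115
	kubectl --context functional-548498 delete -f testdata/invalidsvc.yaml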

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 config get cpus: exit status 14 (61.956159ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 config get cpus: exit status 14 (75.691178ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
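
Note: exit status 14 is the expected "key not found" result for config get on an unset key; a sketch of the set/get/unset round-trip:

	minikube -p functional-548498 config set cpus 2
	minikube -p functional-548498 config get cpus	# prints 2
	minikube -p functional-548498 config unset cpus
	minikube -p functional-548498 config get cpus	# exits 14: key not found in config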

TestFunctional/parallel/DashboardCmd (7.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-548498 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-548498 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 306801: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.87s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (157.841017ms)

-- stdout --
	* [functional-548498] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0908 10:43:57.796647  305728 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:57.796965  305728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:57.796976  305728 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:57.796980  305728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:57.797251  305728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 10:43:57.797816  305728 out.go:368] Setting JSON to false
	I0908 10:43:57.798885  305728 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5182,"bootTime":1757323056,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:57.798998  305728 start.go:140] virtualization: kvm guest
	I0908 10:43:57.801184  305728 out.go:179] * [functional-548498] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:57.802539  305728 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:57.802576  305728 notify.go:220] Checking for updates...
	I0908 10:43:57.805190  305728 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:57.806690  305728 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:43:57.808367  305728 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 10:43:57.809698  305728 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:57.811140  305728 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:57.812967  305728 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:57.813434  305728 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:57.837739  305728 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 10:43:57.837866  305728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:43:57.891144  305728 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 10:43:57.880985888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:43:57.891311  305728 docker.go:318] overlay module found
	I0908 10:43:57.893821  305728 out.go:179] * Using the docker driver based on existing profile
	I0908 10:43:57.895044  305728 start.go:304] selected driver: docker
	I0908 10:43:57.895061  305728 start.go:918] validating driver "docker" against &{Name:functional-548498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-548498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:57.895156  305728 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:57.897431  305728 out.go:203] 
	W0908 10:43:57.898650  305728 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 10:43:57.900073  305728 out.go:203] 

                                                
                                                
** /stderr **
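Note that exit status 23 is the outcome this test asserts: the run deliberately requests --memory 250MB to confirm that minikube rejects allocations below its usable minimum of 1800MB before doing any work. A dry run with an adequate allocation should validate cleanly; a minimal sketch, with the memory value assumed rather than taken from this run:

	out/minikube-linux-amd64 start -p functional-548498 --dry-run --memory 2048MB --alsologtostderr --driver=docker --container-runtime=crio

The second invocation below omits --memory entirely, which is why it succeeds against the existing profile.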
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548498 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.531535ms)

                                                
                                                
-- stdout --
	* [functional-548498] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 10:43:57.645025  305652 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:43:57.645328  305652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:57.645340  305652 out.go:374] Setting ErrFile to fd 2...
	I0908 10:43:57.645347  305652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:43:57.645696  305652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 10:43:57.646292  305652 out.go:368] Setting JSON to false
	I0908 10:43:57.647331  305652 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5182,"bootTime":1757323056,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 10:43:57.647454  305652 start.go:140] virtualization: kvm guest
	I0908 10:43:57.649566  305652 out.go:179] * [functional-548498] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 10:43:57.650905  305652 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 10:43:57.650902  305652 notify.go:220] Checking for updates...
	I0908 10:43:57.653350  305652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 10:43:57.654544  305652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 10:43:57.655862  305652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 10:43:57.657237  305652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 10:43:57.658440  305652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 10:43:57.660086  305652 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:43:57.660594  305652 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 10:43:57.683966  305652 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 10:43:57.684074  305652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:43:57.735597  305652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 10:43:57.725508586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:43:57.735742  305652 docker.go:318] overlay module found
	I0908 10:43:57.737717  305652 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 10:43:57.738991  305652 start.go:304] selected driver: docker
	I0908 10:43:57.739006  305652 start.go:918] validating driver "docker" against &{Name:functional-548498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-548498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 10:43:57.739100  305652 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 10:43:57.741373  305652 out.go:203] 
	W0908 10:43:57.742707  305652 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 10:43:57.744027  305652 out.go:203] 

                                                
                                                
** /stderr **
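In English, the localized lines above read "* Using the docker driver based on existing profile" and "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB"; the French output itself is what the test asserts. The locale is presumably selected through the child process environment; a sketch of an equivalent manual invocation, with the LC_ALL setting assumed rather than shown in this log:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-548498 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio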
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c9471720-f86e-4a61-862a-ec76a9c86f65] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004647791s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-548498 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-548498 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-548498 get pvc myclaim -o=json
I0908 10:43:33.290773  264164 retry.go:31] will retry after 2.921645867s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:75590a78-6598-44b7-9d75-a6d942057621 ResourceVersion:725 Generation:0 CreationTimestamp:2025-09-08 10:43:33 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-75590a78-6598-44b7-9d75-a6d942057621 StorageClassName:0xc001b88890 VolumeMode:0xc001b888a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-548498 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-548498 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [274f84e3-be12-4b6f-9b57-b1627c5535a8] Pending
helpers_test.go:352: "sp-pod" [274f84e3-be12-4b6f-9b57-b1627c5535a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [274f84e3-be12-4b6f-9b57-b1627c5535a8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003796806s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-548498 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-548498 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-548498 apply -f testdata/storage-provisioner/pod.yaml
I0908 10:43:50.491701  264164 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f0d34dd2-9009-4351-b362-a69238605cce] Pending
helpers_test.go:352: "sp-pod" [f0d34dd2-9009-4351-b362-a69238605cce] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004261342s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-548498 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.62s)
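In outline, the test binds a 500Mi ReadWriteOnce claim, writes /tmp/mount/foo from the first sp-pod, deletes that pod, schedules a second sp-pod against the same claim, and confirms the file survived the deletion. A sketch of spot-checking the final state by hand (the jsonpath form is assumed; the test itself uses -o=json):

	kubectl --context functional-548498 get pvc myclaim -o jsonpath='{.status.phase}'
	kubectl --context functional-548498 exec sp-pod -- ls /tmp/mount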

                                                
                                    
TestFunctional/parallel/SSHCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.09s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh -n functional-548498 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cp functional-548498:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1928985227/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh -n functional-548498 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh -n functional-548498 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
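The three invocations cover host-to-node, node-to-host, and host-to-a-nonexistent-node-path copies; minikube cp addresses files inside the node as <profile>:<path>, and a plain destination path, as in the first and third runs, lands inside the node. A sketch of a round-trip verification along the same lines, with the local destination path assumed:

	out/minikube-linux-amd64 -p functional-548498 cp functional-548498:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
	diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt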

                                                
                                    
TestFunctional/parallel/MySQL (23.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-548498 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xb8wk" [2b67f72f-fd28-4445-9430-00335e7da73e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xb8wk" [2b67f72f-fd28-4445-9430-00335e7da73e] Running
I0908 10:43:36.495091  264164 detect.go:223] nested VM detected
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.031167972s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;": exit status 1 (213.189362ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 10:43:42.590496  264164 retry.go:31] will retry after 1.191527349s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;": exit status 1 (108.398339ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 10:43:43.890756  264164 retry.go:31] will retry after 1.235562476s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;": exit status 1 (115.337476ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 10:43:45.242942  264164 retry.go:31] will retry after 2.818720263s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.05s)
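The retried errors are the usual startup sequence for the mysql:5.7 image: the pod reports Running while the image's init scripts are still working, so early clients typically see an authentication rejection (ERROR 1045) and then a missing socket (ERROR 2002) before the server finally accepts queries. The retry logic above amounts to something like this sketch:

	until kubectl --context functional-548498 exec mysql-5bb876957f-xb8wk -- mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
		sleep 2
	done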

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/264164/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /etc/test/nested/copy/264164/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
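The file is not created inside the node by hand: minikube syncs anything placed under $MINIKUBE_HOME/files into the node at the same relative path when the machine starts. A sketch of reproducing the check, with the setup steps assumed (a restart is needed after writing the file):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/264164"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/264164/hosts"
	out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /etc/test/nested/copy/264164/hosts"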

                                                
                                    
TestFunctional/parallel/CertSync (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/264164.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /etc/ssl/certs/264164.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/264164.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /usr/share/ca-certificates/264164.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2641642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /etc/ssl/certs/2641642.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2641642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /usr/share/ca-certificates/2641642.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
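The numeric filenames are OpenSSL subject-hash links: 51391683.0 and 3ec20f2e.0 are the hashed names under which tools scanning /etc/ssl/certs locate the corresponding PEM files. The hash for a given certificate can be recomputed with:

	openssl x509 -noout -hash -in /etc/ssl/certs/264164.pem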

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-548498 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
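The go-template prints only the label keys of the first node. An equivalent spot check, not part of the test itself, is:

	kubectl --context functional-548498 get nodes --show-labels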

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh "sudo systemctl is-active docker": exit status 1 (313.287277ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh "sudo systemctl is-active containerd": exit status 1 (345.539ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
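The non-zero exits here are the assertion rather than a failure: systemctl is-active exits with status 3 for an inactive unit, which ssh surfaces as "Process exited with status 3", and that is exactly what the test expects for docker and containerd on a crio cluster. The converse check for the active runtime, assumed here rather than run by the test, would be:

	out/minikube-linux-amd64 -p functional-548498 ssh "sudo systemctl is-active crio"

which should print active and exit 0.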

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548498 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-548498
localhost/kicbase/echo-server:functional-548498
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548498 image ls --format short --alsologtostderr:
I0908 10:44:01.085855  307363 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:01.085992  307363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:01.086036  307363 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:01.086044  307363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:01.086389  307363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
I0908 10:44:01.087241  307363 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:01.087403  307363 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:01.088026  307363 cli_runner.go:164] Run: docker container inspect functional-548498 --format={{.State.Status}}
I0908 10:44:01.107586  307363 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:01.107684  307363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548498
I0908 10:44:01.129283  307363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/functional-548498/id_rsa Username:docker}
I0908 10:44:01.228536  307363 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
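As the stderr trace shows, image ls opens an ssh session to the node and shells out to sudo crictl images --output json, then formats the result. The underlying data can be inspected directly with:

	out/minikube-linux-amd64 -p functional-548498 ssh "sudo crictl images"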

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548498 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-548498  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-548498  │ edd2bc6de680b │ 3.33kB │
│ localhost/my-image                      │ functional-548498  │ 5577093bd97e9 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548498 image ls --format table --alsologtostderr:
I0908 10:44:06.064171  308044 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:06.064457  308044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:06.064467  308044 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:06.064472  308044 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:06.064698  308044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
I0908 10:44:06.065264  308044 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:06.065392  308044 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:06.065820  308044 cli_runner.go:164] Run: docker container inspect functional-548498 --format={{.State.Status}}
I0908 10:44:06.083863  308044 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:06.083909  308044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548498
I0908 10:44:06.102393  308044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/functional-548498/id_rsa Username:docker}
I0908 10:44:06.188362  308044 ssh_runner.go:195] Run: sudo crictl images --output json
E0908 10:44:45.596531  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:47:01.733287  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:47:29.438767  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:52:01.733155  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548498 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"aea62dba91add31385f313430ba3978edf6ba7b595a09f41039040d97de529b6","repoDigests":["docker.io/library/be7f0ee45581cbdda26eab16b282f2331570c2923044f70b8bdce88ddcaa74f9-tmp@sha256:b3ac685fcda33f7f99aec32a4830ea193deff9eca2cf457907751cf2b81db258"],"repoTags":[],"size":"1465610"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e
568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2
f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84
805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-548498"],"size":"4943877"},{"id":"edd2bc6de680bf61fc9529af97831a028583a90cba16b14d6f9ebae5793c8b33","repoDigests":["localhost/minikube-local-cache-test@sha256:91345e7826771a0889b18e2227f07c78fdb3c0754569812a4f60393302debbbe"],"repoTags":["localhost/minikube-local-cache-test:functional-548498"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s
.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a26110
3315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"5577093bd97e9c29bf1c0db45ba696f55b62
fa2028d4b459f622653a221e5408","repoDigests":["localhost/my-image@sha256:1d8a1a800f2b766223ff167941dcfc037bce7e154921514d01dd9d0c671be861"],"repoTags":["localhost/my-image:functional-548498"],"size":"1468192"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548498 image ls --format json --alsologtostderr:
I0908 10:44:05.891034  308006 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:05.891282  308006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:05.891290  308006 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:05.891294  308006 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:05.891519  308006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
I0908 10:44:05.892098  308006 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:05.892186  308006 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:05.892577  308006 cli_runner.go:164] Run: docker container inspect functional-548498 --format={{.State.Status}}
I0908 10:44:05.911098  308006 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:05.911164  308006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548498
I0908 10:44:05.928578  308006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/functional-548498/id_rsa Username:docker}
I0908 10:44:06.012046  308006 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548498 image ls --format yaml --alsologtostderr:
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: edd2bc6de680bf61fc9529af97831a028583a90cba16b14d6f9ebae5793c8b33
repoDigests:
- localhost/minikube-local-cache-test@sha256:91345e7826771a0889b18e2227f07c78fdb3c0754569812a4f60393302debbbe
repoTags:
- localhost/minikube-local-cache-test:functional-548498
size: "3330"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-548498
size: "4943877"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548498 image ls --format yaml --alsologtostderr:
I0908 10:44:01.345748  307411 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:01.346033  307411 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:01.346047  307411 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:01.346053  307411 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:01.346356  307411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
I0908 10:44:01.347019  307411 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:01.347140  307411 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:01.347541  307411 cli_runner.go:164] Run: docker container inspect functional-548498 --format={{.State.Status}}
I0908 10:44:01.366894  307411 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:01.366959  307411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548498
I0908 10:44:01.386920  307411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/functional-548498/id_rsa Username:docker}
I0908 10:44:01.480384  307411 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh pgrep buildkitd: exit status 1 (267.175794ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image build -t localhost/my-image:functional-548498 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 image build -t localhost/my-image:functional-548498 testdata/build --alsologtostderr: (3.819390102s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548498 image build -t localhost/my-image:functional-548498 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> aea62dba91a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-548498
--> 5577093bd97
Successfully tagged localhost/my-image:functional-548498
5577093bd97e9c29bf1c0db45ba696f55b62fa2028d4b459f622653a221e5408
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548498 image build -t localhost/my-image:functional-548498 testdata/build --alsologtostderr:
I0908 10:44:01.847426  307551 out.go:360] Setting OutFile to fd 1 ...
I0908 10:44:01.847610  307551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:01.847621  307551 out.go:374] Setting ErrFile to fd 2...
I0908 10:44:01.847626  307551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 10:44:01.847873  307551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
I0908 10:44:01.848509  307551 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:01.849181  307551 config.go:182] Loaded profile config "functional-548498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 10:44:01.849589  307551 cli_runner.go:164] Run: docker container inspect functional-548498 --format={{.State.Status}}
I0908 10:44:01.868517  307551 ssh_runner.go:195] Run: systemctl --version
I0908 10:44:01.868572  307551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548498
I0908 10:44:01.889650  307551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/functional-548498/id_rsa Username:docker}
I0908 10:44:01.976463  307551 build_images.go:161] Building image from path: /tmp/build.3751382272.tar
I0908 10:44:01.976548  307551 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 10:44:01.987155  307551 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3751382272.tar
I0908 10:44:01.991203  307551 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3751382272.tar: stat -c "%s %y" /var/lib/minikube/build/build.3751382272.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3751382272.tar': No such file or directory
I0908 10:44:01.991242  307551 ssh_runner.go:362] scp /tmp/build.3751382272.tar --> /var/lib/minikube/build/build.3751382272.tar (3072 bytes)
I0908 10:44:02.021781  307551 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3751382272
I0908 10:44:02.032800  307551 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3751382272 -xf /var/lib/minikube/build/build.3751382272.tar
I0908 10:44:02.079878  307551 crio.go:315] Building image: /var/lib/minikube/build/build.3751382272
I0908 10:44:02.079995  307551 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-548498 /var/lib/minikube/build/build.3751382272 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 10:44:05.587929  307551 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-548498 /var/lib/minikube/build/build.3751382272 --cgroup-manager=cgroupfs: (3.507901376s)
I0908 10:44:05.588000  307551 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3751382272
I0908 10:44:05.596945  307551 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3751382272.tar
I0908 10:44:05.605484  307551 build_images.go:217] Built localhost/my-image:functional-548498 from /tmp/build.3751382272.tar
I0908 10:44:05.605526  307551 build_images.go:133] succeeded building to: functional-548498
I0908 10:44:05.605533  307551 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls
2025/09/08 10:44:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.32s)
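Note: the build context testdata/build is not reproduced in this report, but the three STEP lines in the stdout above spell out its instructions. A minimal sketch of the equivalent Containerfile, read off the log (the actual file, including the contents of content.txt, may differ):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /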

TestFunctional/parallel/ImageCommands/Setup (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-548498
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.47s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image load --daemon kicbase/echo-server:functional-548498 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 image load --daemon kicbase/echo-server:functional-548498 --alsologtostderr: (1.309318463s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.88s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image load --daemon kicbase/echo-server:functional-548498 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 image load --daemon kicbase/echo-server:functional-548498 --alsologtostderr: (1.100129003s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.69s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.81s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-548498 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-548498 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-548498 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 301484: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-548498 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.81s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-548498 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-548498 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [04414cb8-dbee-4ba8-8b7d-eb46fe3dc486] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [04414cb8-dbee-4ba8-8b7d-eb46fe3dc486] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.003653293s
I0908 10:43:44.690461  264164 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.37s)
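Note: testdata/testsvc.yaml is not reproduced in this log. From what the test records (a pod matched by run=nginx-svc, a Service named nginx-svc in default, and the later loadBalancer.ingress IP check), it plausibly looks like the sketch below; the image tag and port are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-svc
      labels:
        run: nginx-svc        # matched by the run=nginx-svc wait above
    spec:
      containers:
      - name: nginx
        image: nginx          # assumption: exact image not shown in the log
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer      # needed for the tunnel to assign an ingress IP
      selector:
        run: nginx-svc
      ports:
      - port: 80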

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-548498
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image load --daemon kicbase/echo-server:functional-548498 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image save kicbase/echo-server:functional-548498 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 image save kicbase/echo-server:functional-548498 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.695209389s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.70s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image rm kicbase/echo-server:functional-548498 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 image ls: (1.823927631s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.23s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-548498
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 image save --daemon kicbase/echo-server:functional-548498 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-548498
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.84s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548498 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.79.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-548498 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "331.982376ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "55.147171ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "325.906487ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.022068ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (7.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdany-port3160208764/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757328229349300448" to /tmp/TestFunctionalparallelMountCmdany-port3160208764/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757328229349300448" to /tmp/TestFunctionalparallelMountCmdany-port3160208764/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757328229349300448" to /tmp/TestFunctionalparallelMountCmdany-port3160208764/001/test-1757328229349300448
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.765669ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 10:43:49.624342  264164 retry.go:31] will retry after 369.598703ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 10:43 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 10:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 10:43 test-1757328229349300448
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh cat /mount-9p/test-1757328229349300448
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-548498 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [480935f6-0563-401e-aec5-553403db3a1a] Pending
helpers_test.go:352: "busybox-mount" [480935f6-0563-401e-aec5-553403db3a1a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [480935f6-0563-401e-aec5-553403db3a1a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [480935f6-0563-401e-aec5-553403db3a1a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003601888s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-548498 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdany-port3160208764/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.54s)
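Note: testdata/busybox-mount-test.yaml is not shown in this log. From the pod name, the integration-test=busybox-mount selector, the mount-munger container name, and the later stat checks on /mount-9p/created-by-pod, a plausible sketch follows; the image, command, and volume wiring are all assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-mount
      labels:
        integration-test: busybox-mount
    spec:
      restartPolicy: Never            # the pod ends in Succeeded/PodCompleted above
      containers:
      - name: mount-munger
        image: gcr.io/k8s-minikube/busybox   # assumption
        # assumption: creates created-by-pod and removes created-by-test-removed-by-pod,
        # matching the checks the test runs afterwards
        command: ["/bin/sh", "-c", "touch /mount-9p/created-by-pod && rm /mount-9p/created-by-test-removed-by-pod"]
        volumeMounts:
        - mountPath: /mount-9p
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /mount-9p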

TestFunctional/parallel/MountCmd/specific-port (2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdspecific-port1992184534/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (292.254105ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 10:43:57.183740  264164 retry.go:31] will retry after 683.215686ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdspecific-port1992184534/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548498 ssh "sudo umount -f /mount-9p": exit status 1 (254.701443ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-548498 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdspecific-port1992184534/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-548498 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548498 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1206926474/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 service list: (1.689704064s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-548498 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-548498 service list -o json: (1.693786362s)
functional_test.go:1504: Took "1.693901036s" to run "out/minikube-linux-amd64 -p functional-548498 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-548498
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-548498
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-548498
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (172.6s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m51.896494991s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (172.60s)

TestMultiControlPlane/serial/DeployApp (5.3s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 kubectl -- rollout status deployment/busybox: (3.233304711s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-fxmzk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-gvx7n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-l9t4j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-fxmzk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-gvx7n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-l9t4j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-fxmzk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-gvx7n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-l9t4j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.30s)
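Note: ./testdata/ha/ha-pod-dns-test.yaml is not reproduced here. Given deployment/busybox and the three busybox-7b57f96db7-* pods used for the nslookup checks, it plausibly resembles the sketch below; the image and command are assumptions, and the real manifest may additionally spread pods across nodes:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
    spec:
      replicas: 3                    # three busybox-7b57f96db7-* pods appear above
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: gcr.io/k8s-minikube/busybox   # assumption
            command: ["sleep", "3600"]           # keep pods alive for kubectl exec; assumption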

TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-fxmzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-fxmzk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-gvx7n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-gvx7n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-l9t4j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 kubectl -- exec busybox-7b57f96db7-l9t4j -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

TestMultiControlPlane/serial/AddWorkerNode (58.4s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node add --alsologtostderr -v 5
E0908 10:57:01.733461  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 node add --alsologtostderr -v 5: (57.540448659s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.40s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-902854 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

TestMultiControlPlane/serial/CopyFile (16.52s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp testdata/cp-test.txt ha-902854:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile400568874/001/cp-test_ha-902854.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854:/home/docker/cp-test.txt ha-902854-m02:/home/docker/cp-test_ha-902854_ha-902854-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test_ha-902854_ha-902854-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854:/home/docker/cp-test.txt ha-902854-m03:/home/docker/cp-test_ha-902854_ha-902854-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test_ha-902854_ha-902854-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854:/home/docker/cp-test.txt ha-902854-m04:/home/docker/cp-test_ha-902854_ha-902854-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test_ha-902854_ha-902854-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp testdata/cp-test.txt ha-902854-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile400568874/001/cp-test_ha-902854-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m02:/home/docker/cp-test.txt ha-902854:/home/docker/cp-test_ha-902854-m02_ha-902854.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test_ha-902854-m02_ha-902854.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m02:/home/docker/cp-test.txt ha-902854-m03:/home/docker/cp-test_ha-902854-m02_ha-902854-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test_ha-902854-m02_ha-902854-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m02:/home/docker/cp-test.txt ha-902854-m04:/home/docker/cp-test_ha-902854-m02_ha-902854-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test_ha-902854-m02_ha-902854-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp testdata/cp-test.txt ha-902854-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile400568874/001/cp-test_ha-902854-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m03:/home/docker/cp-test.txt ha-902854:/home/docker/cp-test_ha-902854-m03_ha-902854.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test_ha-902854-m03_ha-902854.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m03:/home/docker/cp-test.txt ha-902854-m02:/home/docker/cp-test_ha-902854-m03_ha-902854-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test_ha-902854-m03_ha-902854-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m03:/home/docker/cp-test.txt ha-902854-m04:/home/docker/cp-test_ha-902854-m03_ha-902854-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test_ha-902854-m03_ha-902854-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp testdata/cp-test.txt ha-902854-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile400568874/001/cp-test_ha-902854-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m04:/home/docker/cp-test.txt ha-902854:/home/docker/cp-test_ha-902854-m04_ha-902854.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854 "sudo cat /home/docker/cp-test_ha-902854-m04_ha-902854.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m04:/home/docker/cp-test.txt ha-902854-m02:/home/docker/cp-test_ha-902854-m04_ha-902854-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m02 "sudo cat /home/docker/cp-test_ha-902854-m04_ha-902854-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 cp ha-902854-m04:/home/docker/cp-test.txt ha-902854-m03:/home/docker/cp-test_ha-902854-m04_ha-902854-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 ssh -n ha-902854-m03 "sudo cat /home/docker/cp-test_ha-902854-m04_ha-902854-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.52s)

TestMultiControlPlane/serial/StopSecondaryNode (12.58s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 node stop m02 --alsologtostderr -v 5: (11.89852851s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5: exit status 7 (683.668627ms)
-- stdout --
	ha-902854
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-902854-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-902854-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-902854-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0908 10:58:22.948480  333022 out.go:360] Setting OutFile to fd 1 ...
	I0908 10:58:22.948744  333022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:58:22.948753  333022 out.go:374] Setting ErrFile to fd 2...
	I0908 10:58:22.948757  333022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 10:58:22.948983  333022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 10:58:22.949172  333022 out.go:368] Setting JSON to false
	I0908 10:58:22.949206  333022 mustload.go:65] Loading cluster: ha-902854
	I0908 10:58:22.949326  333022 notify.go:220] Checking for updates...
	I0908 10:58:22.949589  333022 config.go:182] Loaded profile config "ha-902854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 10:58:22.949614  333022 status.go:174] checking status of ha-902854 ...
	I0908 10:58:22.951795  333022 cli_runner.go:164] Run: docker container inspect ha-902854 --format={{.State.Status}}
	I0908 10:58:22.970234  333022 status.go:371] ha-902854 host status = "Running" (err=<nil>)
	I0908 10:58:22.970279  333022 host.go:66] Checking if "ha-902854" exists ...
	I0908 10:58:22.970582  333022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-902854
	I0908 10:58:22.989485  333022 host.go:66] Checking if "ha-902854" exists ...
	I0908 10:58:22.989780  333022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 10:58:22.989881  333022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-902854
	I0908 10:58:23.008740  333022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/ha-902854/id_rsa Username:docker}
	I0908 10:58:23.093622  333022 ssh_runner.go:195] Run: systemctl --version
	I0908 10:58:23.098150  333022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 10:58:23.111269  333022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 10:58:23.165873  333022 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 10:58:23.155272226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 10:58:23.166535  333022 kubeconfig.go:125] found "ha-902854" server: "https://192.168.49.254:8443"
	I0908 10:58:23.166572  333022 api_server.go:166] Checking apiserver status ...
	I0908 10:58:23.166612  333022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 10:58:23.178203  333022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1587/cgroup
	I0908 10:58:23.187566  333022 api_server.go:182] apiserver freezer: "6:freezer:/docker/56b7c356479ea0cf1bf1970a35e7c722bf9ec96555eb706921d8813f2dce2ed6/crio/crio-b6037d82afc51abc6862aeb7d133ee75518c89475821b183902cb258ba85fa51"
	I0908 10:58:23.187686  333022 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/56b7c356479ea0cf1bf1970a35e7c722bf9ec96555eb706921d8813f2dce2ed6/crio/crio-b6037d82afc51abc6862aeb7d133ee75518c89475821b183902cb258ba85fa51/freezer.state
	I0908 10:58:23.196362  333022 api_server.go:204] freezer state: "THAWED"
	I0908 10:58:23.196392  333022 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 10:58:23.200818  333022 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 10:58:23.200846  333022 status.go:463] ha-902854 apiserver status = Running (err=<nil>)
	I0908 10:58:23.200858  333022 status.go:176] ha-902854 status: &{Name:ha-902854 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 10:58:23.200881  333022 status.go:174] checking status of ha-902854-m02 ...
	I0908 10:58:23.201124  333022 cli_runner.go:164] Run: docker container inspect ha-902854-m02 --format={{.State.Status}}
	I0908 10:58:23.219532  333022 status.go:371] ha-902854-m02 host status = "Stopped" (err=<nil>)
	I0908 10:58:23.219573  333022 status.go:384] host is not running, skipping remaining checks
	I0908 10:58:23.219581  333022 status.go:176] ha-902854-m02 status: &{Name:ha-902854-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 10:58:23.219616  333022 status.go:174] checking status of ha-902854-m03 ...
	I0908 10:58:23.219932  333022 cli_runner.go:164] Run: docker container inspect ha-902854-m03 --format={{.State.Status}}
	I0908 10:58:23.238572  333022 status.go:371] ha-902854-m03 host status = "Running" (err=<nil>)
	I0908 10:58:23.238600  333022 host.go:66] Checking if "ha-902854-m03" exists ...
	I0908 10:58:23.238856  333022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-902854-m03
	I0908 10:58:23.257727  333022 host.go:66] Checking if "ha-902854-m03" exists ...
	I0908 10:58:23.258013  333022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 10:58:23.258049  333022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-902854-m03
	I0908 10:58:23.277558  333022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/ha-902854-m03/id_rsa Username:docker}
	I0908 10:58:23.365629  333022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 10:58:23.379110  333022 kubeconfig.go:125] found "ha-902854" server: "https://192.168.49.254:8443"
	I0908 10:58:23.379161  333022 api_server.go:166] Checking apiserver status ...
	I0908 10:58:23.379196  333022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 10:58:23.391127  333022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	I0908 10:58:23.402419  333022 api_server.go:182] apiserver freezer: "6:freezer:/docker/c8ea0c1ef88312ffcf46d51a5bd7d519ccc042a29dec5f9938c7769b29303863/crio/crio-bbe75dcfb29acefdac8f7e2ea2b1ca7e484706d00ff8c4afc95ea92bab20251d"
	I0908 10:58:23.402505  333022 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c8ea0c1ef88312ffcf46d51a5bd7d519ccc042a29dec5f9938c7769b29303863/crio/crio-bbe75dcfb29acefdac8f7e2ea2b1ca7e484706d00ff8c4afc95ea92bab20251d/freezer.state
	I0908 10:58:23.412487  333022 api_server.go:204] freezer state: "THAWED"
	I0908 10:58:23.412526  333022 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 10:58:23.417967  333022 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 10:58:23.418003  333022 status.go:463] ha-902854-m03 apiserver status = Running (err=<nil>)
	I0908 10:58:23.418015  333022 status.go:176] ha-902854-m03 status: &{Name:ha-902854-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 10:58:23.418056  333022 status.go:174] checking status of ha-902854-m04 ...
	I0908 10:58:23.418409  333022 cli_runner.go:164] Run: docker container inspect ha-902854-m04 --format={{.State.Status}}
	I0908 10:58:23.437884  333022 status.go:371] ha-902854-m04 host status = "Running" (err=<nil>)
	I0908 10:58:23.437912  333022 host.go:66] Checking if "ha-902854-m04" exists ...
	I0908 10:58:23.438234  333022 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-902854-m04
	I0908 10:58:23.457912  333022 host.go:66] Checking if "ha-902854-m04" exists ...
	I0908 10:58:23.458239  333022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 10:58:23.458278  333022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-902854-m04
	I0908 10:58:23.477904  333022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/ha-902854-m04/id_rsa Username:docker}
	I0908 10:58:23.565424  333022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 10:58:23.578124  333022 status.go:176] ha-902854-m04 status: &{Name:ha-902854-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.58s)
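Note: to replay this check by hand, stop the secondary control-plane node and read the status exit code; minikube status deliberately exits non-zero (7 here) whenever any host in the profile is stopped. A minimal sketch, assuming the ha-902854 profile from this run:

    out/minikube-linux-amd64 -p ha-902854 node stop m02        # stop the secondary control plane
    out/minikube-linux-amd64 -p ha-902854 status               # prints the per-node table shown above
    echo $?                                                    # 7, matching the Non-zero exit the test expects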

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (20.62s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node start m02 --alsologtostderr -v 5
E0908 10:58:24.801278  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.346240  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.352728  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.364238  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.385707  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.427209  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.508686  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.670224  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:25.991835  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:26.633909  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:27.915570  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:30.477894  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 10:58:35.600241  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 node start m02 --alsologtostderr -v 5: (19.621575827s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.62s)
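Note: the restart side of the same scenario, sketched by hand against the same assumed ha-902854 profile; status should return to exit code 0 once every host is Running again:

    out/minikube-linux-amd64 -p ha-902854 node start m02       # bring the stopped node back
    out/minikube-linux-amd64 -p ha-902854 status               # exit 0 once all hosts report Running
    kubectl get nodes                                          # all four nodes should be Ready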

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (145.55s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node list --alsologtostderr -v 5
E0908 10:58:45.842110  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 stop --alsologtostderr -v 5
E0908 10:59:06.324013  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 stop --alsologtostderr -v 5: (36.838860962s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 start --wait true --alsologtostderr -v 5
E0908 10:59:47.287194  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:01:09.209760  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 start --wait true --alsologtostderr -v 5: (1m48.600365142s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (145.55s)
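Note: the property under test is that a full stop/start cycle preserves cluster membership. A by-hand sketch of the same comparison (the /tmp paths are just illustrative scratch files):

    out/minikube-linux-amd64 -p ha-902854 node list > /tmp/nodes.before
    out/minikube-linux-amd64 -p ha-902854 stop
    out/minikube-linux-amd64 -p ha-902854 start --wait true
    out/minikube-linux-amd64 -p ha-902854 node list > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after                    # empty diff: the restart kept every node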

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.47s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 node delete m03 --alsologtostderr -v 5: (10.665628359s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.47s)
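Note: the go-template in the last step is the test's readiness probe; it walks every node's conditions and prints the status of each Ready condition, one per line. A sketch of the same check after removing the third control-plane node:

    out/minikube-linux-amd64 -p ha-902854 node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expect one "True" line per remaining node and no False/Unknown lines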

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.74s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 stop --alsologtostderr -v 5: (35.633184281s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5: exit status 7 (109.00533ms)
-- stdout --
	ha-902854
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-902854-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-902854-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 11:01:59.167979  349800 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:01:59.168472  349800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:01:59.168539  349800 out.go:374] Setting ErrFile to fd 2...
	I0908 11:01:59.168562  349800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:01:59.169054  349800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 11:01:59.169526  349800 out.go:368] Setting JSON to false
	I0908 11:01:59.169624  349800 mustload.go:65] Loading cluster: ha-902854
	I0908 11:01:59.169647  349800 notify.go:220] Checking for updates...
	I0908 11:01:59.170385  349800 config.go:182] Loaded profile config "ha-902854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:01:59.170418  349800 status.go:174] checking status of ha-902854 ...
	I0908 11:01:59.170935  349800 cli_runner.go:164] Run: docker container inspect ha-902854 --format={{.State.Status}}
	I0908 11:01:59.188808  349800 status.go:371] ha-902854 host status = "Stopped" (err=<nil>)
	I0908 11:01:59.188851  349800 status.go:384] host is not running, skipping remaining checks
	I0908 11:01:59.188860  349800 status.go:176] ha-902854 status: &{Name:ha-902854 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:01:59.188914  349800 status.go:174] checking status of ha-902854-m02 ...
	I0908 11:01:59.189236  349800 cli_runner.go:164] Run: docker container inspect ha-902854-m02 --format={{.State.Status}}
	I0908 11:01:59.207393  349800 status.go:371] ha-902854-m02 host status = "Stopped" (err=<nil>)
	I0908 11:01:59.207439  349800 status.go:384] host is not running, skipping remaining checks
	I0908 11:01:59.207454  349800 status.go:176] ha-902854-m02 status: &{Name:ha-902854-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:01:59.207495  349800 status.go:174] checking status of ha-902854-m04 ...
	I0908 11:01:59.207896  349800 cli_runner.go:164] Run: docker container inspect ha-902854-m04 --format={{.State.Status}}
	I0908 11:01:59.225850  349800 status.go:371] ha-902854-m04 host status = "Stopped" (err=<nil>)
	I0908 11:01:59.225875  349800 status.go:384] host is not running, skipping remaining checks
	I0908 11:01:59.225881  349800 status.go:176] ha-902854-m04 status: &{Name:ha-902854-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (58.17s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 11:02:01.734567  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.371435814s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.17s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.41s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 node add --control-plane --alsologtostderr -v 5
E0908 11:03:25.347901  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:03:53.051897  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-902854 node add --control-plane --alsologtostderr -v 5: (1m16.547211627s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-902854 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.41s)
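Note: node add --control-plane grows the control plane back instead of adding a worker. A sketch against the same assumed profile (minikube assigns the next free m0N suffix to the new node):

    out/minikube-linux-amd64 -p ha-902854 node add --control-plane
    out/minikube-linux-amd64 -p ha-902854 status               # new node listed as "type: Control Plane" with apiserver: Running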

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (72.09s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-836946 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-836946 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m12.084625899s)
--- PASS: TestJSONOutput/start/Command (72.09s)
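Note: with --output=json, every progress step is emitted as a one-per-line CloudEvent (the same shape as the samples under TestErrorJSONOutput below), so the stream is easy to post-process. A sketch, assuming jq is available:

    out/minikube-linux-amd64 start -p json-output-836946 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.message'
    # prints progress as "0/19 [json-output-836946] minikube v1.36.0 ..." and so on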

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-836946 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-836946 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.81s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-836946 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-836946 --output=json --user=testUser: (5.813335147s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-247517 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-247517 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.61704ms)
-- stdout --
	{"specversion":"1.0","id":"6d3741ba-9deb-4186-9c17-516a8ccbb95e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-247517] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1bf0e12-3815-49cb-a14c-b1e45a12fefe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21503"}}
	{"specversion":"1.0","id":"1c30ec1a-5b9f-4a7a-88e9-4ec1312cf90f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"418c0579-029c-4b97-9302-e45dddf2a9dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig"}}
	{"specversion":"1.0","id":"ab3c95e8-83e9-458b-acf3-c682fb5f4238","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube"}}
	{"specversion":"1.0","id":"615a4fd0-839b-4559-b013-9cc938d42169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5d439605-9a4c-48f8-93bd-5318c12bc13f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a780be09-cb99-4ab9-9824-175091ceecf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-247517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-247517
--- PASS: TestErrorJSONOutput (0.22s)
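Note: failures travel through the same event stream as type io.k8s.sigs.minikube.error, carrying a machine-readable name and exit code (DRV_UNSUPPORTED_OS / 56 above). A sketch of extracting them, again assuming jq:

    out/minikube-linux-amd64 start -p json-output-error-247517 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'
    # minikube itself exits 56, matching data.exitcode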

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.05s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-200849 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-200849 --network=: (27.902960327s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-200849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-200849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-200849: (2.124299661s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.05s)
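Note: with the docker driver, an empty --network= value lets minikube create a dedicated bridge network for the profile, which the test then looks for in docker network ls. Sketch:

    out/minikube-linux-amd64 start -p docker-network-200849 --network=
    docker network ls --format '{{.Name}}'                     # a network created for the profile should appear
    out/minikube-linux-amd64 delete -p docker-network-200849   # delete tears the network down as well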

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.07s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-946728 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-946728 --network=bridge: (23.066307555s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-946728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-946728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-946728: (1.986708956s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.07s)

                                                
                                    
TestKicExistingNetwork (25.36s)
=== RUN   TestKicExistingNetwork
I0908 11:06:43.411443  264164 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 11:06:43.429773  264164 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 11:06:43.429864  264164 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 11:06:43.429882  264164 cli_runner.go:164] Run: docker network inspect existing-network
W0908 11:06:43.447221  264164 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 11:06:43.447257  264164 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0908 11:06:43.447280  264164 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0908 11:06:43.447419  264164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 11:06:43.465586  264164 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd66b88f88bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:ed:52:e2:fa:d2} reservation:<nil>}
I0908 11:06:43.466068  264164 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ad1400}
I0908 11:06:43.466111  264164 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 11:06:43.466162  264164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 11:06:43.521824  264164 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-848417 --network=existing-network
E0908 11:07:01.738454  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-848417 --network=existing-network: (23.249495616s)
helpers_test.go:175: Cleaning up "existing-network-848417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-848417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-848417: (1.960104717s)
I0908 11:07:08.751707  264164 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.36s)
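Note: the inspect/create chatter above is the test pre-creating existing-network on a free private subnet so that minikube start can join it instead of making its own. A condensed sketch (the full docker network create flag set the test used is in the log line above):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-848417 --network=existing-network
    docker network inspect existing-network                    # the node container should be attached here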

                                                
                                    
TestKicCustomSubnet (28.15s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-079365 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-079365 --subnet=192.168.60.0/24: (26.007498499s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-079365 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-079365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-079365
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-079365: (2.126796093s)
--- PASS: TestKicCustomSubnet (28.15s)
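Note: --subnet pins the profile's docker network to a caller-chosen CIDR, and the inspect format string above is how the test reads it back. Sketch:

    out/minikube-linux-amd64 start -p custom-subnet-079365 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-079365 --format '{{(index .IPAM.Config 0).Subnet}}'
    # prints 192.168.60.0/24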

                                                
                                    
TestKicStaticIP (27.71s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-187268 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-187268 --static-ip=192.168.200.200: (25.454238612s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-187268 ip
helpers_test.go:175: Cleaning up "static-ip-187268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-187268
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-187268: (2.120887051s)
--- PASS: TestKicStaticIP (27.71s)
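Note: --static-ip assigns the node container a fixed address rather than one from the subnet pool, and minikube ip is the read-back. Sketch:

    out/minikube-linux-amd64 start -p static-ip-187268 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-187268 ip            # should print 192.168.200.200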

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (56.19s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-326900 --driver=docker  --container-runtime=crio
E0908 11:08:25.348686  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-326900 --driver=docker  --container-runtime=crio: (25.104095001s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-349507 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-349507 --driver=docker  --container-runtime=crio: (26.095162846s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-326900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-349507
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-349507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-349507
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-349507: (1.883911569s)
helpers_test.go:175: Cleaning up "first-326900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-326900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-326900: (1.883060125s)
--- PASS: TestMinikubeProfile (56.19s)
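Note: minikube profile NAME switches the active profile, which is what the two list calls above are compared against. Sketch with the two profiles from this run:

    out/minikube-linux-amd64 profile first-326900
    out/minikube-linux-amd64 profile list -ojson               # the test asserts first-326900 is now active
    out/minikube-linux-amd64 profile second-349507
    out/minikube-linux-amd64 profile list -ojson               # and then that the active profile flipped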

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.08s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-471216 --memory=3072 --mount-string /tmp/TestMountStartserial1535771863/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-471216 --memory=3072 --mount-string /tmp/TestMountStartserial1535771863/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.075930709s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.08s)
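Note: the long flag set above wires a host directory into the node at start time: --mount-string HOST:GUEST picks the two paths, and --mount-port/--mount-uid/--mount-gid/--mount-msize tune the 9p server behind it. A trimmed sketch (the host path is illustrative):

    out/minikube-linux-amd64 start -p mount-start-1-471216 --memory=3072 \
      --mount-string /tmp/somedir:/minikube-host --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-471216 ssh -- ls /minikube-host   # host files visible in the guest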

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-471216 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.45s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-490380 --memory=3072 --mount-string /tmp/TestMountStartserial1535771863/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-490380 --memory=3072 --mount-string /tmp/TestMountStartserial1535771863/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.452225415s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-490380 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-471216 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-471216 --alsologtostderr -v=5: (1.64031019s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-490380 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-490380
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-490380: (1.188194081s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.23s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-490380
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-490380: (6.230799249s)
--- PASS: TestMountStart/serial/RestartStopped (7.23s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-490380 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (127.37s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394266 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394266 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m6.90325684s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.37s)
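Note: --nodes=2 brings up a control plane plus one worker in a single start; the follow-up status call is what verifies both came up. Sketch:

    out/minikube-linux-amd64 start -p multinode-394266 --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-394266 status        # multinode-394266 (control plane) plus multinode-394266-m02 (worker)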

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.94s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-394266 -- rollout status deployment/busybox: (3.433397827s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-dsf55 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-h98xb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-dsf55 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-h98xb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-dsf55 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-h98xb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.94s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-dsf55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-dsf55 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-h98xb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394266 -- exec busybox-7b57f96db7-h98xb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
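For readability, the host-reachability check above reduces to two commands run inside each busybox pod. This is a minimal standalone sketch; the pod name is a placeholder, and the awk/cut offsets assume busybox nslookup's output layout, where line 5 carries "Address: <ip>":

    # resolve the host gateway as seen from the pod, then ping it once
    kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"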

                                                
                                    
TestMultiNode/serial/AddNode (57s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-394266 -v=5 --alsologtostderr
E0908 11:12:01.733353  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-394266 -v=5 --alsologtostderr: (56.378249668s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.00s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-394266 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.39s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp testdata/cp-test.txt multinode-394266:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2473645207/001/cp-test_multinode-394266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266:/home/docker/cp-test.txt multinode-394266-m02:/home/docker/cp-test_multinode-394266_multinode-394266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m02 "sudo cat /home/docker/cp-test_multinode-394266_multinode-394266-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266:/home/docker/cp-test.txt multinode-394266-m03:/home/docker/cp-test_multinode-394266_multinode-394266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m03 "sudo cat /home/docker/cp-test_multinode-394266_multinode-394266-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp testdata/cp-test.txt multinode-394266-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2473645207/001/cp-test_multinode-394266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266-m02:/home/docker/cp-test.txt multinode-394266:/home/docker/cp-test_multinode-394266-m02_multinode-394266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266 "sudo cat /home/docker/cp-test_multinode-394266-m02_multinode-394266.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266-m02:/home/docker/cp-test.txt multinode-394266-m03:/home/docker/cp-test_multinode-394266-m02_multinode-394266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m03 "sudo cat /home/docker/cp-test_multinode-394266-m02_multinode-394266-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp testdata/cp-test.txt multinode-394266-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2473645207/001/cp-test_multinode-394266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266-m03:/home/docker/cp-test.txt multinode-394266:/home/docker/cp-test_multinode-394266-m03_multinode-394266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266 "sudo cat /home/docker/cp-test_multinode-394266-m03_multinode-394266.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266-m03:/home/docker/cp-test.txt multinode-394266-m02:/home/docker/cp-test_multinode-394266-m03_multinode-394266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 ssh -n multinode-394266-m02 "sudo cat /home/docker/cp-test_multinode-394266-m03_multinode-394266-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.39s)
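The cp matrix above exercises all three copy directions supported by minikube cp. A condensed sketch of one pass (destination paths are illustrative; each copy is then verified with ssh -n <node> "sudo cat <dest>"):

    # host -> node
    out/minikube-linux-amd64 -p multinode-394266 cp testdata/cp-test.txt multinode-394266:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-394266 cp multinode-394266:/home/docker/cp-test.txt multinode-394266-m02:/home/docker/cp-test.txt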

                                                
                                    
TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-394266 node stop m03: (1.193618227s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394266 status: exit status 7 (471.020024ms)

                                                
                                                
-- stdout --
	multinode-394266
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-394266-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-394266-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr: exit status 7 (469.707002ms)

                                                
                                                
-- stdout --
	multinode-394266
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-394266-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-394266-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:12:49.206673  414623 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:12:49.206838  414623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:12:49.206852  414623 out.go:374] Setting ErrFile to fd 2...
	I0908 11:12:49.206859  414623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:12:49.207115  414623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 11:12:49.207309  414623 out.go:368] Setting JSON to false
	I0908 11:12:49.207342  414623 mustload.go:65] Loading cluster: multinode-394266
	I0908 11:12:49.207502  414623 notify.go:220] Checking for updates...
	I0908 11:12:49.207789  414623 config.go:182] Loaded profile config "multinode-394266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:12:49.207815  414623 status.go:174] checking status of multinode-394266 ...
	I0908 11:12:49.208325  414623 cli_runner.go:164] Run: docker container inspect multinode-394266 --format={{.State.Status}}
	I0908 11:12:49.228328  414623 status.go:371] multinode-394266 host status = "Running" (err=<nil>)
	I0908 11:12:49.228366  414623 host.go:66] Checking if "multinode-394266" exists ...
	I0908 11:12:49.228666  414623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-394266
	I0908 11:12:49.246950  414623 host.go:66] Checking if "multinode-394266" exists ...
	I0908 11:12:49.247345  414623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:12:49.247413  414623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-394266
	I0908 11:12:49.266941  414623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/multinode-394266/id_rsa Username:docker}
	I0908 11:12:49.353130  414623 ssh_runner.go:195] Run: systemctl --version
	I0908 11:12:49.357443  414623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:12:49.369493  414623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:12:49.420747  414623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 11:12:49.411374045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:12:49.421286  414623 kubeconfig.go:125] found "multinode-394266" server: "https://192.168.67.2:8443"
	I0908 11:12:49.421315  414623 api_server.go:166] Checking apiserver status ...
	I0908 11:12:49.421360  414623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:12:49.432447  414623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1500/cgroup
	I0908 11:12:49.441933  414623 api_server.go:182] apiserver freezer: "6:freezer:/docker/3fff4fa15285724b42ecdd51ffcd58321783414e474baf764e65732c4c73358f/crio/crio-d52cef18dbd8421405d14936a25e6babe77ffec0b30f9ea12daa884d89a07478"
	I0908 11:12:49.442019  414623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3fff4fa15285724b42ecdd51ffcd58321783414e474baf764e65732c4c73358f/crio/crio-d52cef18dbd8421405d14936a25e6babe77ffec0b30f9ea12daa884d89a07478/freezer.state
	I0908 11:12:49.450819  414623 api_server.go:204] freezer state: "THAWED"
	I0908 11:12:49.450861  414623 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 11:12:49.455025  414623 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 11:12:49.455049  414623 status.go:463] multinode-394266 apiserver status = Running (err=<nil>)
	I0908 11:12:49.455075  414623 status.go:176] multinode-394266 status: &{Name:multinode-394266 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:12:49.455101  414623 status.go:174] checking status of multinode-394266-m02 ...
	I0908 11:12:49.455433  414623 cli_runner.go:164] Run: docker container inspect multinode-394266-m02 --format={{.State.Status}}
	I0908 11:12:49.472540  414623 status.go:371] multinode-394266-m02 host status = "Running" (err=<nil>)
	I0908 11:12:49.472572  414623 host.go:66] Checking if "multinode-394266-m02" exists ...
	I0908 11:12:49.472825  414623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-394266-m02
	I0908 11:12:49.490881  414623 host.go:66] Checking if "multinode-394266-m02" exists ...
	I0908 11:12:49.491171  414623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:12:49.491212  414623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-394266-m02
	I0908 11:12:49.508834  414623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21503-260352/.minikube/machines/multinode-394266-m02/id_rsa Username:docker}
	I0908 11:12:49.593124  414623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:12:49.604694  414623 status.go:176] multinode-394266-m02 status: &{Name:multinode-394266-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:12:49.604733  414623 status.go:174] checking status of multinode-394266-m03 ...
	I0908 11:12:49.605005  414623 cli_runner.go:164] Run: docker container inspect multinode-394266-m03 --format={{.State.Status}}
	I0908 11:12:49.622091  414623 status.go:371] multinode-394266-m03 host status = "Stopped" (err=<nil>)
	I0908 11:12:49.622127  414623 status.go:384] host is not running, skipping remaining checks
	I0908 11:12:49.622137  414623 status.go:176] multinode-394266-m03 status: &{Name:multinode-394266-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
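The --alsologtostderr trace above shows how the status probe verifies the apiserver: find its pid, read its freezer cgroup, confirm the state is THAWED, then hit /healthz. A hedged sketch of those steps (the cgroup path is abbreviated, and curl here merely stands in for minikube's internal HTTP client):

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup                # locate the freezer cgroup
    sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state    # expect "THAWED"
    curl -k https://192.168.67.2:8443/healthz                      # expect 200 "ok"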

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-394266 node start m03 -v=5 --alsologtostderr: (6.694017698s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-394266
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-394266
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-394266: (24.759264231s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394266 --wait=true -v=5 --alsologtostderr
E0908 11:13:25.348579  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394266 --wait=true -v=5 --alsologtostderr: (49.85740161s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-394266
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.73s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-394266 node delete m03: (4.775848271s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.37s)
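For reference, the readiness template in the final step, with shell escaping removed, renders one status line per node; after the delete the test expects only "True" entries:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'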

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.88s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-394266 stop: (23.697089674s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394266 status: exit status 7 (92.331323ms)

                                                
                                                
-- stdout --
	multinode-394266
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-394266-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr: exit status 7 (91.470467ms)

                                                
                                                
-- stdout --
	multinode-394266
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-394266-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:14:40.940287  424281 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:14:40.940409  424281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:14:40.940424  424281 out.go:374] Setting ErrFile to fd 2...
	I0908 11:14:40.940430  424281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:14:40.940670  424281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 11:14:40.940853  424281 out.go:368] Setting JSON to false
	I0908 11:14:40.940883  424281 mustload.go:65] Loading cluster: multinode-394266
	I0908 11:14:40.940968  424281 notify.go:220] Checking for updates...
	I0908 11:14:40.941279  424281 config.go:182] Loaded profile config "multinode-394266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:14:40.941302  424281 status.go:174] checking status of multinode-394266 ...
	I0908 11:14:40.941740  424281 cli_runner.go:164] Run: docker container inspect multinode-394266 --format={{.State.Status}}
	I0908 11:14:40.960973  424281 status.go:371] multinode-394266 host status = "Stopped" (err=<nil>)
	I0908 11:14:40.960999  424281 status.go:384] host is not running, skipping remaining checks
	I0908 11:14:40.961006  424281 status.go:176] multinode-394266 status: &{Name:multinode-394266 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:14:40.961032  424281 status.go:174] checking status of multinode-394266-m02 ...
	I0908 11:14:40.961337  424281 cli_runner.go:164] Run: docker container inspect multinode-394266-m02 --format={{.State.Status}}
	I0908 11:14:40.980552  424281 status.go:371] multinode-394266-m02 host status = "Stopped" (err=<nil>)
	I0908 11:14:40.980616  424281 status.go:384] host is not running, skipping remaining checks
	I0908 11:14:40.980627  424281 status.go:176] multinode-394266-m02 status: &{Name:multinode-394266-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394266 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 11:14:48.414100  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:15:04.802774  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394266 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.968371686s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394266 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.57s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-394266
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394266-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-394266-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.207642ms)

                                                
                                                
-- stdout --
	* [multinode-394266-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-394266-m02' is duplicated with machine name 'multinode-394266-m02' in profile 'multinode-394266'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394266-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394266-m03 --driver=docker  --container-runtime=crio: (22.179783645s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-394266
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-394266: exit status 80 (276.870589ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-394266 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-394266-m03 already exists in multinode-394266-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-394266-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-394266-m03: (1.886945316s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.47s)

                                                
                                    
TestPreload (116.79s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-394548 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-394548 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.076898436s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-394548 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-394548 image pull gcr.io/k8s-minikube/busybox: (2.594360414s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-394548
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-394548: (5.811322097s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-394548 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0908 11:17:01.733639  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-394548 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.764096322s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-394548 image list
helpers_test.go:175: Cleaning up "test-preload-394548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-394548
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-394548: (2.317506487s)
--- PASS: TestPreload (116.79s)
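Condensed, the preload scenario above is: start an old cluster with preloads disabled, pull an image that is not in any preload tarball, stop, restart with preloads allowed, and assert the pulled image survived. A sketch with flags abbreviated to the ones that matter:

    out/minikube-linux-amd64 start -p test-preload-394548 --preload=false --kubernetes-version=v1.32.0 ...
    out/minikube-linux-amd64 -p test-preload-394548 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-394548
    out/minikube-linux-amd64 start -p test-preload-394548 ...
    out/minikube-linux-amd64 -p test-preload-394548 image list    # busybox must still be listed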

                                                
                                    
TestScheduledStopUnix (103.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-906582 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-906582 --memory=3072 --driver=docker  --container-runtime=crio: (27.201422073s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-906582 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-906582 -n scheduled-stop-906582
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-906582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 11:18:21.464687  264164 retry.go:31] will retry after 98.002µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.465874  264164 retry.go:31] will retry after 193.769µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.467018  264164 retry.go:31] will retry after 301.064µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.468250  264164 retry.go:31] will retry after 392.551µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.469401  264164 retry.go:31] will retry after 586.781µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.470580  264164 retry.go:31] will retry after 645.64µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.471761  264164 retry.go:31] will retry after 608.945µs: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.472943  264164 retry.go:31] will retry after 2.500945ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.476248  264164 retry.go:31] will retry after 2.713741ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.479514  264164 retry.go:31] will retry after 5.568434ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.485800  264164 retry.go:31] will retry after 4.240784ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.491158  264164 retry.go:31] will retry after 8.059446ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.499400  264164 retry.go:31] will retry after 6.567142ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.506741  264164 retry.go:31] will retry after 22.509354ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.530047  264164 retry.go:31] will retry after 17.441669ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
I0908 11:18:21.548389  264164 retry.go:31] will retry after 43.118554ms: open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/scheduled-stop-906582/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-906582 --cancel-scheduled
E0908 11:18:25.347887  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-906582 -n scheduled-stop-906582
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-906582
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-906582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-906582
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-906582: exit status 7 (73.729464ms)

                                                
                                                
-- stdout --
	scheduled-stop-906582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-906582 -n scheduled-stop-906582
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-906582 -n scheduled-stop-906582: exit status 7 (73.400742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-906582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-906582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-906582: (4.570629292s)
--- PASS: TestScheduledStopUnix (103.18s)
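The scheduled-stop surface exercised above, summarized as a sketch (all flags appear verbatim in the log):

    out/minikube-linux-amd64 stop -p scheduled-stop-906582 --schedule 5m         # arm a stop 5 minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-906582 --cancel-scheduled    # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-906582 --schedule 15s        # re-arm; the host stops shortly after
    out/minikube-linux-amd64 status -p scheduled-stop-906582                     # exit status 7 once stopped (expected)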

                                                
                                    
TestInsufficientStorage (12.6s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-484944 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-484944 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.2014032s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"696b8364-f39d-4007-a81b-26b3294eaf17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-484944] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e2aa8b2-fa4a-47a5-84e0-89bdf665f0d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21503"}}
	{"specversion":"1.0","id":"30fd4219-5584-44ac-b1d1-5d2d51fb389a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2846f29-ccd4-4fe1-8a8b-e7de61290ffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig"}}
	{"specversion":"1.0","id":"621fa378-ca51-48f2-98dd-7fd7511cfa2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube"}}
	{"specversion":"1.0","id":"c8ac5e94-3735-4254-a443-d807d87cd951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"661522f8-00fc-4338-bff8-c934511701d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"848d32cf-4350-42fc-8f67-082c1f7210fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c7db6087-2ac4-4562-ba14-4d9ab13e9dd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7a16b620-455b-4ebd-87c2-677ae7a8936c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"59f1d145-23bc-498c-9eb3-660aca9110a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e4c50e5b-e1bd-4c44-9c46-71faec9e3130","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-484944\" primary control-plane node in \"insufficient-storage-484944\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"41ff031c-19ec-4997-9e24-28e118c7fafd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"88fa752f-24bf-49c0-8cec-6ae7874a86b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c9cc1eb-a1ae-4008-ad21-b040d41f162b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-484944 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-484944 --output=json --layout=cluster: exit status 7 (274.4501ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-484944","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-484944","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 11:19:47.482544  446586 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-484944" does not appear in /home/jenkins/minikube-integration/21503-260352/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-484944 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-484944 --output=json --layout=cluster: exit status 7 (267.896907ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-484944","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-484944","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 11:19:47.751882  446686 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-484944" does not appear in /home/jenkins/minikube-integration/21503-260352/kubeconfig
	E0908 11:19:47.762609  446686 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/insufficient-storage-484944/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-484944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-484944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-484944: (1.850955028s)
--- PASS: TestInsufficientStorage (12.60s)
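Unescaped, the remediation advice embedded in the RSRC_DOCKER_STORAGE payload amounts to:

    docker system prune                                    # optionally with -a, to drop unused Docker data
    out/minikube-linux-amd64 ssh -- docker system prune    # if using the Docker container runtime
    # on Docker Desktop: Docker icon > Preferences > Resources > Disk Image Size
    # alternatively, pass --force to minikube start to skip the capacity check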

                                                
                                    
TestRunningBinaryUpgrade (46.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.732469941 start -p running-upgrade-842353 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.732469941 start -p running-upgrade-842353 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.977311264s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-842353 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-842353 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.052791395s)
helpers_test.go:175: Cleaning up "running-upgrade-842353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-842353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-842353: (2.009290613s)
--- PASS: TestRunningBinaryUpgrade (46.57s)

                                                
                                    
TestKubernetesUpgrade (331.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.373426731s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-939977
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-939977: (1.571316997s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-939977 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-939977 status --format={{.Host}}: exit status 7 (93.748261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.867503832s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-939977 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (94.893191ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-939977] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-939977
	    minikube start -p kubernetes-upgrade-939977 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9399772 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-939977 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.967310552s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-939977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-939977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-939977: (2.612895267s)
--- PASS: TestKubernetesUpgrade (331.66s)
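The upgrade path validated above, condensed; note that the in-place downgrade is rejected with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), and the suggested recovery is delete-and-recreate rather than a forced downgrade:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --kubernetes-version=v1.28.0 ...
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-939977
    out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --kubernetes-version=v1.34.0 ...   # in-place upgrade: OK
    out/minikube-linux-amd64 start -p kubernetes-upgrade-939977 --kubernetes-version=v1.28.0 ...   # downgrade: exit 106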

                                                
                                    
TestMissingContainerUpgrade (79.21s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3913117343 start -p missing-upgrade-495755 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3913117343 start -p missing-upgrade-495755 --memory=3072 --driver=docker  --container-runtime=crio: (27.317970398s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-495755
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-495755: (2.345316481s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-495755
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-495755 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0908 11:22:01.733472  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-495755 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.911336864s)
helpers_test.go:175: Cleaning up "missing-upgrade-495755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-495755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-495755: (1.975859288s)
--- PASS: TestMissingContainerUpgrade (79.21s)
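This scenario simulates the node container vanishing between minikube releases: the v1.32.0 binary provisions the cluster, the container is stopped and removed behind minikube's back, and the current binary must detect the missing container and re-provision on start. The sequence, as exercised above:

    /tmp/minikube-v1.32.0.3913117343 start -p missing-upgrade-495755 --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-495755    # stop the node container
    docker rm missing-upgrade-495755      # remove it entirely
    out/minikube-linux-amd64 start -p missing-upgrade-495755 --memory=3072 --driver=docker --container-runtime=crio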

TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-376439 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-376439 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (83.321675ms)
-- stdout --
	* [NoKubernetes-376439] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
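The flags are mutually exclusive by design: --no-kubernetes asks for a bare container while --kubernetes-version pins a cluster version, so minikube bails out with MK_USAGE (exit 14). If the version is pinned in the global config rather than on the command line, the suggested fix is to clear it first:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-376439 --no-kubernetes --driver=docker --container-runtime=crio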

TestNoKubernetes/serial/StartWithK8s (35.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-376439 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-376439 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.203184404s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-376439 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.60s)

TestStoppedBinaryUpgrade/Upgrade (66.49s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3861910683 start -p stopped-upgrade-396271 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3861910683 start -p stopped-upgrade-396271 --memory=3072 --vm-driver=docker  --container-runtime=crio: (48.468747516s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3861910683 -p stopped-upgrade-396271 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3861910683 -p stopped-upgrade-396271 stop: (1.222110385s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-396271 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-396271 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.800594337s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.49s)
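The stopped-binary upgrade path: provision with the previous release, stop the cluster with that same release, then start it with the freshly built binary, which must adopt and upgrade the stopped cluster. The three steps from this run:

    /tmp/minikube-v1.32.0.3861910683 start -p stopped-upgrade-396271 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.3861910683 -p stopped-upgrade-396271 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-396271 --memory=3072 --driver=docker --container-runtime=crio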

TestNetworkPlugins/group/false (9.71s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-211312 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-211312 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (817.94039ms)
-- stdout --
	* [false-211312] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21503
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0908 11:19:54.326745  448628 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:19:54.326856  448628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:19:54.326909  448628 out.go:374] Setting ErrFile to fd 2...
	I0908 11:19:54.326922  448628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:19:54.327135  448628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21503-260352/.minikube/bin
	I0908 11:19:54.327952  448628 out.go:368] Setting JSON to false
	I0908 11:19:54.329254  448628 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7338,"bootTime":1757323056,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:19:54.329332  448628 start.go:140] virtualization: kvm guest
	I0908 11:19:54.331832  448628 out.go:179] * [false-211312] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:19:54.333312  448628 out.go:179]   - MINIKUBE_LOCATION=21503
	I0908 11:19:54.333326  448628 notify.go:220] Checking for updates...
	I0908 11:19:54.334562  448628 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:19:54.335761  448628 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21503-260352/kubeconfig
	I0908 11:19:54.337313  448628 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21503-260352/.minikube
	I0908 11:19:54.342960  448628 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:19:54.370353  448628 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:19:54.433818  448628 config.go:182] Loaded profile config "NoKubernetes-376439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:19:54.434022  448628 config.go:182] Loaded profile config "offline-crio-349159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:19:54.434179  448628 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:19:54.463299  448628 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:19:54.463453  448628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:19:54.531233  448628 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-09-08 11:19:54.519694969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:19:54.531375  448628 docker.go:318] overlay module found
	I0908 11:19:54.588612  448628 out.go:179] * Using the docker driver based on user configuration
	I0908 11:19:54.663296  448628 start.go:304] selected driver: docker
	I0908 11:19:54.663330  448628 start.go:918] validating driver "docker" against <nil>
	I0908 11:19:54.663356  448628 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:19:54.756086  448628 out.go:203] 
	W0908 11:19:54.913488  448628 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 11:19:54.996833  448628 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-211312 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-211312

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-211312

>>> host: /etc/nsswitch.conf:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/hosts:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/resolv.conf:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-211312

>>> host: crictl pods:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: crictl containers:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> k8s: describe netcat deployment:
error: context "false-211312" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-211312" does not exist

>>> k8s: netcat logs:
error: context "false-211312" does not exist

>>> k8s: describe coredns deployment:
error: context "false-211312" does not exist

>>> k8s: describe coredns pods:
error: context "false-211312" does not exist

>>> k8s: coredns logs:
error: context "false-211312" does not exist

>>> k8s: describe api server pod(s):
error: context "false-211312" does not exist

>>> k8s: api server logs:
error: context "false-211312" does not exist

>>> host: /etc/cni:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: ip a s:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: ip r s:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: iptables-save:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: iptables table nat:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> k8s: describe kube-proxy daemon set:
error: context "false-211312" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-211312" does not exist

>>> k8s: kube-proxy logs:
error: context "false-211312" does not exist

>>> host: kubelet daemon status:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: kubelet daemon config:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> k8s: kubelet logs:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-211312

>>> host: docker daemon status:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: docker daemon config:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/docker/daemon.json:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: docker system info:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: cri-docker daemon status:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: cri-docker daemon config:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: cri-dockerd version:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: containerd daemon status:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: containerd daemon config:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/containerd/config.toml:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: containerd config dump:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: crio daemon status:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: crio daemon config:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: /etc/crio:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

>>> host: crio config:
* Profile "false-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211312"

----------------------- debugLogs end: false-211312 [took: 8.671704219s] --------------------------------
helpers_test.go:175: Cleaning up "false-211312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-211312
--- PASS: TestNetworkPlugins/group/false (9.71s)
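The failure here is the expected one: CRI-O ships no built-in pod networking, so minikube rejects --cni=false for that runtime with MK_USAGE. With --container-runtime=crio a CNI must either be left at the default or named explicitly, as the later network-plugin runs in this report do:

    minikube start -p kindnet-211312 --cni=kindnet --driver=docker --container-runtime=crio
    minikube start -p custom-flannel-211312 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio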

TestNoKubernetes/serial/StartWithStopK8s (27.97s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-376439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-376439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.590031234s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-376439 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-376439 status -o json: exit status 2 (320.299291ms)
-- stdout --
	{"Name":"NoKubernetes-376439","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-376439
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-376439: (2.059422854s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.97s)
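Note the status JSON: the node container stays up ("Host":"Running") while Kubernetes itself is torn down ("Kubelet":"Stopped", "APIServer":"Stopped"), and minikube status signals the stopped components through its exit code (2 here) while still printing the JSON. A quick field check (assumes jq is installed):

    minikube -p NoKubernetes-376439 status -o json | jq -r .Kubelet    # prints "Stopped"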

TestNoKubernetes/serial/Start (8.05s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-376439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-376439 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.049532214s)
--- PASS: TestNoKubernetes/serial/Start (8.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-396271
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-396271: (1.016533019s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-376439 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-376439 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.21629ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
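The verification leans on systemd semantics: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive; that remote code surfaces in stderr as "Process exited with status 3", and minikube ssh itself exits non-zero, which is the expected (passing) outcome here. Equivalent manual probe:

    minikube ssh -p NoKubernetes-376439 'sudo systemctl is-active --quiet service kubelet' || echo "kubelet not running"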

TestNoKubernetes/serial/ProfileList (6.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (5.708973609s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.63s)
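profile list is exercised in both its table and JSON forms; the JSON form is what later tests (e.g., TestPause/serial/VerifyDeletedResources) parse to confirm a profile is gone. A sketch of pulling profile names out of it (field names as emitted by recent minikube releases; assumes jq):

    minikube profile list --output=json | jq -r '.valid[].Name'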

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-376439
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-376439: (1.215022806s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (9.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-376439 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-376439 --driver=docker  --container-runtime=crio: (9.343580746s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-376439 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-376439 "sudo systemctl is-active --quiet service kubelet": exit status 1 (371.928395ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestPause/serial/Start (77s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-107187 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-107187 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.002048841s)
--- PASS: TestPause/serial/Start (77.00s)

TestNetworkPlugins/group/auto/Start (71s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.995961438s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.00s)

TestPause/serial/SecondStartNoReconfiguration (51.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-107187 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0908 11:23:25.346606  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-107187 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.04262478s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.06s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-211312 "pgrep -a kubelet"
I0908 11:23:51.629638  264164 config.go:182] Loaded profile config "auto-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t5cwx" [7b97a137-3ba3-47cf-8f08-41815b616eff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t5cwx" [7b97a137-3ba3-47cf-8f08-41815b616eff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004348433s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
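The auto plugin's three probes all run inside the same netcat deployment: a DNS lookup of kubernetes.default, a dial to localhost, and a hairpin dial in which the pod reaches itself through its own Service name. In the nc invocations, -z connects without sending data, -w 5 bounds the wait, and -i 5 spaces the attempts:

    kubectl --context auto-211312 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # hairpin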

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-107187 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-107187 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-107187 --output=json --layout=cluster: exit status 2 (325.004402ms)
-- stdout --
	{"Name":"pause-107187","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-107187","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
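The cluster layout borrows HTTP-style status codes: 200/OK, 405/Stopped, 418/Paused. A paused cluster therefore makes minikube status exit 2 even though the JSON is produced normally, which is why the test asserts on the output rather than the exit code. Reading the top-level state (assumes jq):

    minikube status -p pause-107187 --output=json --layout=cluster | jq .StatusCode    # 418 while paused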

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-107187 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.93s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-107187 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

TestPause/serial/DeletePaused (2.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-107187 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-107187 --alsologtostderr -v=5: (2.783167866s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

TestPause/serial/VerifyDeletedResources (30.59s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (30.528608384s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-107187
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-107187: exit status 1 (17.810916ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-107187: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (30.59s)
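Cleanup is verified from the Docker side rather than by trusting minikube: after delete, the profile's volume must be gone (docker volume inspect exits 1 with "no such volume"), and the container and network listings must no longer mention the profile. A manual spot-check along the same lines:

    docker volume inspect pause-107187 >/dev/null 2>&1 || echo "volume removed"
    docker ps -a --filter name=pause-107187      # should list nothing
    docker network ls | grep pause-107187 || echo "network removed"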

TestNetworkPlugins/group/kindnet/Start (44.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.524049309s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.52s)

TestNetworkPlugins/group/calico/Start (59.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.326805437s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.33s)

TestNetworkPlugins/group/custom-flannel/Start (52.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.169117237s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h86px" [6cf112d4-4dc9-4c02-8eee-321afcb77ed8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003708194s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-211312 "pgrep -a kubelet"
I0908 11:25:11.853893  264164 config.go:182] Loaded profile config "kindnet-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m954h" [7000f2f8-edeb-4360-8bdd-7a3d34d698d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m954h" [7000f2f8-edeb-4360-8bdd-7a3d34d698d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004494785s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (65.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.96745084s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.97s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-211312 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-s5rkt" [1d8ca8e8-74c8-4376-bdc4-f9ea68892dea] Running
I0908 11:25:50.366583  264164 config.go:182] Loaded profile config "custom-flannel-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003877295s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fblqx" [30768522-1277-47aa-ab51-b539e1ca6349] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fblqx" [30768522-1277-47aa-ab51-b539e1ca6349] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004230356s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-211312 "pgrep -a kubelet"
I0908 11:25:56.362501  264164 config.go:182] Loaded profile config "calico-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c7sdv" [ee508ca8-b63b-4d3f-a5a6-9508c902339f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c7sdv" [ee508ca8-b63b-4d3f-a5a6-9508c902339f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004294607s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (56.84s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.838131724s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.84s)
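
Note: --cni=flannel deploys the kube-flannel DaemonSet during start; the flannel/ControllerPod step further down then waits for a pod matching app=flannel in the kube-flannel namespace before any connectivity checks run. Abridged from the command above:

  out/minikube-linux-amd64 start -p flannel-211312 --memory=3072 --cni=flannel --driver=docker --container-runtime=crio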

TestNetworkPlugins/group/bridge/Start (61.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-211312 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m1.831128346s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.83s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-211312 "pgrep -a kubelet"
I0908 11:26:49.263886  264164 config.go:182] Loaded profile config "enable-default-cni-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bs4xm" [841ab597-e782-4b73-9dce-95f76e52700b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bs4xm" [841ab597-e782-4b73-9dce-95f76e52700b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004112918s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (57.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-908105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-908105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (57.671996978s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (57.67s)

TestStartStop/group/no-preload/serial/FirstStart (69.75s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-175678 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-175678 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m9.751403743s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.75s)
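
Note: --preload=false makes this profile skip minikube's preloaded image tarball and pull every Kubernetes image from the registry instead, which likely explains why this FirstStart (1m9.75s) runs longer than the preloaded starts nearby (roughly 55-66s). Abridged from the command above:

  out/minikube-linux-amd64 start -p no-preload-175678 --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.34.0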

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-752bc" [b3b82424-4700-44d5-acb7-5fb0865e758e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003808429s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-211312 "pgrep -a kubelet"
I0908 11:27:24.965598  264164 config.go:182] Loaded profile config "flannel-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mmbst" [4b00c57e-7f10-4fff-90df-aff595830153] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mmbst" [4b00c57e-7f10-4fff-90df-aff595830153] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004516676s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-211312 "pgrep -a kubelet"
I0908 11:27:31.017407  264164 config.go:182] Loaded profile config "bridge-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-211312 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j22j7" [5bfe5be4-21f1-4cd5-ac2f-96db588d0860] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j22j7" [5bfe5be4-21f1-4cd5-ac2f-96db588d0860] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004191187s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-211312 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-211312 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-484572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-484572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m15.846863585s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.85s)
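
Note: --apiserver-port=8444 moves the Kubernetes API server off minikube's default 8443; the rest of this serial group (Stop, SecondStart, and the addon toggles below) then has to keep working against the non-default port. Abridged from the command above:

  out/minikube-linux-amd64 start -p default-k8s-diff-port-484572 --memory=3072 --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.34.0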

TestStartStop/group/newest-cni/serial/FirstStart (33.38s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-658830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-658830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (33.379564161s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.38s)
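
Note: this profile pairs --network-plugin=cni with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, which hands the pod CIDR to kubeadm but installs no CNI plugin; that is why several newest-cni steps below warn that "cni mode requires additional setup before pods can schedule". Abridged from the command above:

  out/minikube-linux-amd64 start -p newest-cni-658830 --memory=3072 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio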

TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-908105 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [070de53c-9229-454b-b392-72a9050edffd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [070de53c-9229-454b-b392-72a9050edffd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004890163s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-908105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)
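
Note: DeployApp is a three-step smoke test: create the busybox pod, wait for it to reach Running, then exec a trivial command to prove the container is actually usable. Condensed from the run above:

  kubectl --context old-k8s-version-908105 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-908105 exec busybox -- /bin/sh -c "ulimit -n"
  # prints the container's open-file-descriptor limit; any successful exec satisfies the check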

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-908105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-908105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027786622s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-908105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-908105 --alsologtostderr -v=3
E0908 11:28:25.346340  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-908105 --alsologtostderr -v=3: (12.193200643s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-175678 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [002250fc-284d-41da-9735-fefcd5855600] Pending
helpers_test.go:352: "busybox" [002250fc-284d-41da-9735-fefcd5855600] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [002250fc-284d-41da-9735-fefcd5855600] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004591408s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-175678 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908105 -n old-k8s-version-908105
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908105 -n old-k8s-version-908105: exit status 7 (80.205267ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-908105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
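
Note: status --format={{.Host}} renders just the Host field of minikube's status through a Go template, so on a stopped profile it prints "Stopped" and exits 7, which the test explicitly tolerates ("may be ok") before re-enabling the dashboard addon. Condensed from the run above:

  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908105 -n old-k8s-version-908105
  # stdout "Stopped", exit status 7 while the profile is stopped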

TestStartStop/group/old-k8s-version/serial/SecondStart (49.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-908105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-908105 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.290899935s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-908105 -n old-k8s-version-908105
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.65s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-175678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-175678 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-658830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-175678 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-175678 --alsologtostderr -v=3: (12.08515846s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-658830 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-658830 --alsologtostderr -v=3: (1.198917914s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-658830 -n newest-cni-658830
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-658830 -n newest-cni-658830: exit status 7 (73.343321ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-658830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (15.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-658830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-658830 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (15.593651113s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-658830 -n newest-cni-658830
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-175678 -n no-preload-175678
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-175678 -n no-preload-175678: exit status 7 (122.169197ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-175678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (54.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-175678 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:28:51.854083  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:51.860616  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:51.872148  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:51.893677  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:51.935215  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:52.016779  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:52.178364  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:52.500185  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:53.142611  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:28:54.424680  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-175678 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.007432443s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-175678 -n no-preload-175678
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-658830 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-658830 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-658830 -n newest-cni-658830
E0908 11:28:56.986439  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-658830 -n newest-cni-658830: exit status 2 (324.643437ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-658830 -n newest-cni-658830
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-658830 -n newest-cni-658830: exit status 2 (344.176132ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-658830 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-658830 -n newest-cni-658830
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-658830 -n newest-cni-658830
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.08s)
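
Note: the Pause subtest drives a full pause/unpause cycle and reads the result back through status templates: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (each with exit status 2, tolerated as "may be ok"); after unpause both status calls are expected to succeed again. Condensed from the run above:

  out/minikube-linux-amd64 pause -p newest-cni-658830 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-658830 -n newest-cni-658830
  out/minikube-linux-amd64 unpause -p newest-cni-658830 --alsologtostderr -v=1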

TestStartStop/group/embed-certs/serial/FirstStart (78.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-190919 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:29:02.107833  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:29:12.349333  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-190919 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m18.913306639s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.91s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-484572 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d8d53071-795d-442f-9c54-5291e1e18059] Pending
helpers_test.go:352: "busybox" [d8d53071-795d-442f-9c54-5291e1e18059] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d8d53071-795d-442f-9c54-5291e1e18059] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004138411s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-484572 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-484572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-484572 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-484572 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-484572 --alsologtostderr -v=3: (12.051034896s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rhg42" [b7d75a74-429b-4905-8765-4be0a34f43e5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003309961s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rhg42" [b7d75a74-429b-4905-8765-4be0a34f43e5] Running
E0908 11:29:32.831401  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004689441s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-908105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572: exit status 7 (78.964974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-484572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-484572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-484572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (47.099051879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.42s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-908105 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-908105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908105 -n old-k8s-version-908105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908105 -n old-k8s-version-908105: exit status 2 (300.133118ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-908105 -n old-k8s-version-908105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-908105 -n old-k8s-version-908105: exit status 2 (302.06774ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-908105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-908105 -n old-k8s-version-908105
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-908105 -n old-k8s-version-908105
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rkgkb" [4e65cf91-3922-45ea-9857-8b16a0ca1a73] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004121288s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rkgkb" [4e65cf91-3922-45ea-9857-8b16a0ca1a73] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003466006s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-175678 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-175678 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-175678 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-175678 -n no-preload-175678
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-175678 -n no-preload-175678: exit status 2 (342.900343ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-175678 -n no-preload-175678
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-175678 -n no-preload-175678: exit status 2 (343.14989ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-175678 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-175678 -n no-preload-175678
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-175678 -n no-preload-175678
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
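
The Pause subtest drives a fixed sequence: pause, read the component states through Go status templates ({{.APIServer}}, {{.Kubelet}}), unpause, read them again. Note that `minikube status` deliberately exits non-zero (status 2) while components are paused or stopped, so the harness logs "may be ok" and keeps going. A condensed sketch of the same sequence, with the profile name taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // status runs `minikube status` with a Go template and returns the printed
    // state. Exit status 2 just means a component is Paused/Stopped, so the
    // captured stdout is still meaningful and the error is ignored here,
    // as the test effectively does.
    func status(profile, field string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        profile := "no-preload-175678"
        exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
        fmt.Println("apiserver:", status(profile, "APIServer")) // expect Paused
        fmt.Println("kubelet:  ", status(profile, "Kubelet"))   // expect Stopped
        exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
        fmt.Println("apiserver:", status(profile, "APIServer")) // expect Running
        fmt.Println("kubelet:  ", status(profile, "Kubelet"))
    }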

TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-190919 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fad12fda-84c1-4d8e-b3f6-b97a28e2d153] Pending
helpers_test.go:352: "busybox" [fad12fda-84c1-4d8e-b3f6-b97a28e2d153] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fad12fda-84c1-4d8e-b3f6-b97a28e2d153] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003942895s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-190919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.24s)
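
DeployApp applies testdata/busybox.yaml, waits for the integration-test=busybox pod, then execs `ulimit -n` inside it to confirm the container received the expected file-descriptor limit. Reproduced by hand it is three kubectl calls; a sketch against this run's context, with `kubectl wait` standing in for the harness's own poll:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and prints its combined output, keeping the
    // sketch close to the (dbg) Run lines in the log above.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
    }

    func main() {
        ctx := "embed-certs-190919" // profile/context name from this run
        run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")
        run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
            "pod", "-l", "integration-test=busybox", "--timeout=8m")
        run("kubectl", "--context", ctx, "exec", "busybox", "--",
            "/bin/sh", "-c", "ulimit -n")
    }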

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-txrxv" [672cb625-8c9a-4c24-b8e9-ac3f3d721c8b] Running
E0908 11:30:26.042371  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/kindnet-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004123648s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-190919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-190919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)
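
EnableAddonWhileActive exercises the --images/--registries flags, which let a caller redirect an addon's images; here metrics-server is deliberately pointed at a non-existent registry (fake.domain), and the follow-up describe call checks that the override landed in the deployment. A sketch of the enable-then-verify step; the jsonpath query is an illustrative way to read back the image, not what the test itself runs:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "embed-certs-190919"
        // Redirect the metrics-server image to a fake registry, as the test does.
        exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
            "-p", profile,
            "--images=MetricsServer=registry.k8s.io/echoserver:1.4",
            "--registries=MetricsServer=fake.domain").Run()
        // Verify the override is what the deployment actually references.
        out, err := exec.Command("kubectl", "--context", profile, "-n", "kube-system",
            "get", "deploy", "metrics-server",
            "-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
        fmt.Printf("image=%s err=%v\n", out, err)
    }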

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-txrxv" [672cb625-8c9a-4c24-b8e9-ac3f3d721c8b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004274055s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-484572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/Stop (11.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-190919 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-190919 --alsologtostderr -v=3: (11.933695223s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)
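
The Stop subtest's duration is dominated by the `minikube stop` call itself; the (dbg) Done line times it at roughly 11.9s. Timing it externally is straightforward, a sketch with the same profile and flags:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // --alsologtostderr -v=3 mirrors the flags used by the test harness.
        err := exec.Command("out/minikube-linux-amd64", "stop",
            "-p", "embed-certs-190919", "--alsologtostderr", "-v=3").Run()
        fmt.Printf("stop took %s (err=%v)\n", time.Since(start), err)
    }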

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-484572 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-484572 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572: exit status 2 (298.292602ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572: exit status 2 (303.677233ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-484572 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-484572 -n default-k8s-diff-port-484572
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.73s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190919 -n embed-certs-190919
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190919 -n embed-certs-190919: exit status 7 (75.506088ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-190919 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
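
EnableAddonAfterStop leans on minikube's status exit codes: with the cluster stopped, `status --format={{.Host}}` prints Stopped and exits 7, which the harness accepts ("may be ok") before enabling the dashboard addon offline. Extracting that exit code in Go looks like this (a sketch, same profile as above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "embed-certs-190919",
            "-n", "embed-certs-190919").Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Exit code 7 here corresponds to a stopped host: "may be ok".
            fmt.Printf("state=%s exit=%d\n", out, exitErr.ExitCode())
        } else {
            fmt.Printf("state=%s err=%v\n", out, err)
        }
    }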

TestStartStop/group/embed-certs/serial/SecondStart (48.41s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-190919 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 11:30:46.524329  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/kindnet-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.074065  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.080552  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.092328  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.113732  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.155222  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.236743  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.398038  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.595375  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.602517  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.614020  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.635520  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.677023  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.719782  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.759203  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:50.921019  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:51.243048  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:51.361685  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:51.885048  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:52.643254  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:53.167258  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:55.205373  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:30:55.729157  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:00.326773  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:00.850520  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:10.568777  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:11.092893  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:27.486169  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/kindnet-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:28.416393  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/functional-548498/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-190919 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.09430506s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190919 -n embed-certs-190919
E0908 11:31:31.051067  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/calico-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.41s)
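
SecondStart restarts the stopped profile with the same flag set as the original start (--memory=3072, --embed-certs, --driver=docker, --container-runtime=crio, --kubernetes-version=v1.34.0) and then requires {{.Host}} to come back as Running. The interleaved E0908 cert_rotation lines appear to come from kubeconfig entries that still reference client certificates of already-deleted profiles (kindnet-211312, calico-211312, ...); they are log noise, not failures. A condensed sketch of the restart-and-verify step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "embed-certs-190919"
        // Same flags as the first start, matching the log above.
        err := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
            "--memory=3072", "--wait=true", "--embed-certs",
            "--driver=docker", "--container-runtime=crio",
            "--kubernetes-version=v1.34.0").Run()
        fmt.Println("second start err:", err)
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", profile, "-n", profile).Output()
        fmt.Println("host:", strings.TrimSpace(string(out))) // expect Running
    }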

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wgsd2" [83c55ca5-607e-4c4c-8b59-00a520404b16] Running
E0908 11:31:31.574549  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/custom-flannel-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:35.715100  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/auto-211312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003599274s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wgsd2" [83c55ca5-607e-4c4c-8b59-00a520404b16] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00357446s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-190919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-190919 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-190919 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190919 -n embed-certs-190919
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190919 -n embed-certs-190919: exit status 2 (303.582971ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-190919 -n embed-certs-190919
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-190919 -n embed-certs-190919: exit status 2 (297.608563ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-190919 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190919 -n embed-certs-190919
E0908 11:31:44.805026  264164 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21503-260352/.minikube/profiles/addons-310880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-190919 -n embed-certs-190919
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.78s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-310880 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
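
Many of the skips in this block key off the configured container runtime: docker-only tests bail out when the suite is running crio, as the message above shows. A minimal version of that guard as a Go test helper; the env-var plumbing is an assumption standing in for the harness's real flag handling:

    package runtimecheck // sketch: would live in a *_test.go file

    import (
        "os"
        "testing"
    )

    // containerRuntime would normally come from the harness flags; an env
    // var stands in for it here.
    func containerRuntime() string {
        if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "" {
            return rt
        }
        return "docker"
    }

    func TestDockerOnlyFeature(t *testing.T) {
        if rt := containerRuntime(); rt != "docker" {
            t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
        }
        // ... docker-specific assertions would go here ...
    }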

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.65s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-211312 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-211312

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-211312

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/hosts:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/resolv.conf:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-211312

>>> host: crictl pods:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: crictl containers:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> k8s: describe netcat deployment:
error: context "kubenet-211312" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-211312" does not exist

>>> k8s: netcat logs:
error: context "kubenet-211312" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-211312" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-211312" does not exist

>>> k8s: coredns logs:
error: context "kubenet-211312" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-211312" does not exist

>>> k8s: api server logs:
error: context "kubenet-211312" does not exist

>>> host: /etc/cni:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: ip a s:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: ip r s:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: iptables-save:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: iptables table nat:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-211312" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-211312" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-211312" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: kubelet daemon config:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> k8s: kubelet logs:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-211312

>>> host: docker daemon status:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: docker daemon config:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: docker system info:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: cri-docker daemon status:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: cri-docker daemon config:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: cri-dockerd version:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: containerd daemon status:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: containerd daemon config:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: containerd config dump:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: crio daemon status:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: crio daemon config:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: /etc/crio:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

>>> host: crio config:
* Profile "kubenet-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211312"

----------------------- debugLogs end: kubenet-211312 [took: 4.391859508s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-211312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-211312
--- SKIP: TestNetworkPlugins/group/kubenet (4.65s)
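
Even though the kubenet test is skipped, the harness still runs its debugLogs sweep: a fixed list of probes (kubectl queries, minikube ssh file dumps, daemon status checks) executed against the profile, which here was never started, so every probe reports a missing context or profile. The loop is essentially this shape; the probe list below is abbreviated and the exact commands are assumptions, not lifted from net_test.go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "kubenet-211312" // never started, as in the log above
        probes := []struct {
            name string
            cmd  []string
        }{
            {"netcat: nslookup kubernetes.default",
                []string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
            {"host: /etc/nsswitch.conf",
                []string{"out/minikube-linux-amd64", "ssh", "-p", profile, "cat", "/etc/nsswitch.conf"}},
            {"host: crio daemon status",
                []string{"out/minikube-linux-amd64", "ssh", "-p", profile, "sudo", "systemctl", "status", "crio"}},
        }
        for _, p := range probes {
            // Each probe's output is logged verbatim, errors and all, which
            // is exactly why the section above is a wall of "not found" lines.
            out, _ := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
            fmt.Printf(">>> %s:\n%s\n", p.name, out)
        }
    }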

TestNetworkPlugins/group/cilium (5.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-211312 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-211312

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-211312

>>> host: /etc/nsswitch.conf:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/hosts:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/resolv.conf:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-211312

>>> host: crictl pods:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: crictl containers:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> k8s: describe netcat deployment:
error: context "cilium-211312" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-211312" does not exist

>>> k8s: netcat logs:
error: context "cilium-211312" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-211312" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-211312" does not exist

>>> k8s: coredns logs:
error: context "cilium-211312" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-211312" does not exist

>>> k8s: api server logs:
error: context "cilium-211312" does not exist

>>> host: /etc/cni:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: ip a s:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: ip r s:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: iptables-save:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: iptables table nat:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-211312

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-211312

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-211312" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-211312" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-211312

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-211312

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-211312" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-211312" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-211312" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-211312" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-211312" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: kubelet daemon config:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> k8s: kubelet logs:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
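The dump above is an empty kubeconfig (clusters, contexts, and users are all null), which is why every kubectl-backed probe in this log fails before reaching any API server. A minimal sketch of the failure mode, assuming a shell pointed at the same empty config (the get pods call is a hypothetical stand-in for the probes above):

    kubectl config view                          # prints the empty Config shown above
    kubectl --context cilium-211312 get pods     # hypothetical probe against the missing context
    # fails immediately with a "context was not found" / context "cilium-211312" does not exist error, as logged above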

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-211312

>>> host: docker daemon status:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: docker daemon config:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: docker system info:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: cri-docker daemon status:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: cri-docker daemon config:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: cri-dockerd version:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: containerd daemon status:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: containerd daemon config:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: containerd config dump:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: crio daemon status:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: crio daemon config:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: /etc/crio:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

>>> host: crio config:
* Profile "cilium-211312" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211312"

----------------------- debugLogs end: cilium-211312 [took: 5.180740198s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-211312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-211312
--- SKIP: TestNetworkPlugins/group/cilium (5.39s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-935157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-935157
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
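The gate at start_stop_delete_test.go:101 means this test only proceeds when the suite targets the virtualbox driver, so on this docker/crio job it is skipped by design. A hypothetical invocation of the feature it covers, assuming a host with VirtualBox available (--disable-driver-mounts is the minikube start flag the test is named after):

    out/minikube-linux-amd64 start -p disable-driver-mounts-935157 --driver=virtualbox --disable-driver-mounts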