Test Report: Docker_Linux_crio 21504

3892f90e7d746f1b37c491f3707229f264f0f5da:2025-09-08:41335

Tests failed (7/332)

TestAddons/serial/GCPAuth/FakeCredentials (11.34s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-739733 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-739733 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f31226cb-73cd-493e-a591-81e47f9dcd4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f31226cb-73cd-493e-a591-81e47f9dcd4a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003085149s
addons_test.go:694: (dbg) Run:  kubectl --context addons-739733 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:694: (dbg) Non-zero exit: kubectl --context addons-739733 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": exit status 1 (130.348978ms)

** stderr ** 
	command terminated with exit code 1

** /stderr **
addons_test.go:696: printenv creds: exit status 1
--- FAIL: TestAddons/serial/GCPAuth/FakeCredentials (11.34s)
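Triage note: printenv exits non-zero when the named variable is unset, so the failure above means GOOGLE_APPLICATION_CREDENTIALS was never injected into the busybox pod (the gcp-auth addon is what is expected to inject it via its admission webhook). A minimal manual re-check, assuming the addons-739733 profile is still running:

    # Re-run the exact check the test performs; a non-zero exit means the variable is unset.
    kubectl --context addons-739733 exec busybox -- /bin/sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS'
    # Inspect what was actually injected into the pod spec.
    kubectl --context addons-739733 get pod busybox -o jsonpath='{.spec.containers[0].env}'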

TestAddons/parallel/Ingress (155.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-739733 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-739733 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-739733 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [59a6f75f-e881-44c0-a066-3f0fe0e24ce9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [59a6f75f-e881-44c0-a066-3f0fe0e24ce9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.05847104s
I0908 16:40:41.965983   11141 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-739733 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.312605612s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-739733 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
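Triage note: minikube ssh propagates the remote command's exit status, and curl reserves exit code 28 for an operation timeout, so the "Process exited with status 28" above is curl timing out against the ingress controller rather than an ssh-level error. A sketch for probing it again by hand, assuming the profile is still up (-m 10 caps each attempt at ten seconds):

    # Probe the ingress from inside the node; another 28 means the request still times out.
    out/minikube-linux-amd64 -p addons-739733 ssh "curl -s -m 10 -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Confirm the controller pod is still Ready.
    kubectl --context addons-739733 -n ingress-nginx get pods -l app.kubernetes.io/component=controller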
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-739733
helpers_test.go:243: (dbg) docker inspect addons-739733:

-- stdout --
	[
	    {
	        "Id": "40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3",
	        "Created": "2025-09-08T16:37:24.100498218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T16:37:24.137694731Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3/hosts",
	        "LogPath": "/var/lib/docker/containers/40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3/40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3-json.log",
	        "Name": "/addons-739733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-739733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-739733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3",
	                "LowerDir": "/var/lib/docker/overlay2/db5839877b1aff7850e907faedcc1003395c678b8ff55fd7492463d17b462051-init/diff:/var/lib/docker/overlay2/e8e8fc7fb28a55bf413358d36a5c2b32c680c35a010c40a038aea7770a9d1ab7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db5839877b1aff7850e907faedcc1003395c678b8ff55fd7492463d17b462051/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db5839877b1aff7850e907faedcc1003395c678b8ff55fd7492463d17b462051/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db5839877b1aff7850e907faedcc1003395c678b8ff55fd7492463d17b462051/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-739733",
	                "Source": "/var/lib/docker/volumes/addons-739733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-739733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-739733",
	                "name.minikube.sigs.k8s.io": "addons-739733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4fd7e20eac6739985fbf28f6e1423871d703cd1e3877ffa410df6d0758fff76",
	            "SandboxKey": "/var/run/docker/netns/e4fd7e20eac6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-739733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:ef:57:51:8b:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9b77e488085404089a7784b06f67f1ebdbdbb7d4b70e492a119e6536c2010d9b",
	                    "EndpointID": "d13cc60d9f78aa01135a00a3cc1301497a9edbeca0287685767c6fe8fe5f5ed2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-739733",
	                        "40d0ff34d84e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
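Note on the inspect output above: every published port (22, 2376, 5000, 8443, 32443) is bound to 127.0.0.1 on an ephemeral host port, so from the host the API server is reachable at 127.0.0.1:32771 and ssh at 127.0.0.1:32768. A single mapping can be pulled out with the same Go template minikube itself runs later in these logs, for example:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-739733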
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-739733 -n addons-739733
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 logs -n 25: (1.17484098s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-616006 --alsologtostderr --binary-mirror http://127.0.0.1:43461 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-616006 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │                     │
	│ delete  │ -p binary-mirror-616006                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-616006 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:37 UTC │
	│ addons  │ disable dashboard -p addons-739733                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │                     │
	│ addons  │ enable dashboard -p addons-739733                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │                     │
	│ start   │ -p addons-739733 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ enable headlamp -p addons-739733 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-739733                                                                                                                                                                                                                                                                                                                                                                                           │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ ip      │ addons-739733 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ ssh     │ addons-739733 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │                     │
	│ addons  │ addons-739733 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:41 UTC │
	│ ssh     │ addons-739733 ssh cat /opt/local-path-provisioner/pvc-179b132b-c58d-4324-a2e4-e9d22ba4b122_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-739733 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-739733 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-739733 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ ip      │ addons-739733 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-739733        │ jenkins │ v1.36.0 │ 08 Sep 25 16:42 UTC │ 08 Sep 25 16:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:37:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:37:00.800849   12465 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:37:00.800950   12465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:37:00.800955   12465 out.go:374] Setting ErrFile to fd 2...
	I0908 16:37:00.800958   12465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:37:00.801173   12465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 16:37:00.801905   12465 out.go:368] Setting JSON to false
	I0908 16:37:00.802715   12465 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1165,"bootTime":1757348256,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:37:00.802808   12465 start.go:140] virtualization: kvm guest
	I0908 16:37:00.805210   12465 out.go:179] * [addons-739733] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:37:00.806789   12465 notify.go:220] Checking for updates...
	I0908 16:37:00.806824   12465 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:37:00.808328   12465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:37:00.809947   12465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:37:00.811330   12465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 16:37:00.812891   12465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:37:00.814408   12465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:37:00.816008   12465 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:37:00.837892   12465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 16:37:00.837988   12465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:37:00.884666   12465 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 16:37:00.875835575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:37:00.884816   12465 docker.go:318] overlay module found
	I0908 16:37:00.886978   12465 out.go:179] * Using the docker driver based on user configuration
	I0908 16:37:00.888713   12465 start.go:304] selected driver: docker
	I0908 16:37:00.888734   12465 start.go:918] validating driver "docker" against <nil>
	I0908 16:37:00.888746   12465 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:37:00.889623   12465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:37:00.936308   12465 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-09-08 16:37:00.927434531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:37:00.936526   12465 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 16:37:00.936779   12465 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 16:37:00.938763   12465 out.go:179] * Using Docker driver with root privileges
	I0908 16:37:00.940399   12465 cni.go:84] Creating CNI manager for ""
	I0908 16:37:00.940477   12465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 16:37:00.940494   12465 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 16:37:00.940584   12465 start.go:348] cluster config:
	{Name:addons-739733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-739733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:37:00.942405   12465 out.go:179] * Starting "addons-739733" primary control-plane node in "addons-739733" cluster
	I0908 16:37:00.943816   12465 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 16:37:00.945495   12465 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 16:37:00.947041   12465 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:00.947087   12465 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:37:00.947095   12465 cache.go:58] Caching tarball of preloaded images
	I0908 16:37:00.947141   12465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 16:37:00.947189   12465 preload.go:172] Found /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 16:37:00.947201   12465 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 16:37:00.947546   12465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/config.json ...
	I0908 16:37:00.947571   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/config.json: {Name:mk65fa95fab40c9857d094120c1344ae49524777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:00.963763   12465 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 16:37:00.963874   12465 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 16:37:00.963889   12465 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 16:37:00.963894   12465 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 16:37:00.963900   12465 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 16:37:00.963907   12465 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 16:37:13.714016   12465 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 16:37:13.714070   12465 cache.go:232] Successfully downloaded all kic artifacts
	I0908 16:37:13.714107   12465 start.go:360] acquireMachinesLock for addons-739733: {Name:mk354298c5f49b12b40f01e4b4934dbd904d3266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:37:13.714221   12465 start.go:364] duration metric: took 91.427µs to acquireMachinesLock for "addons-739733"
	I0908 16:37:13.714253   12465 start.go:93] Provisioning new machine with config: &{Name:addons-739733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-739733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 16:37:13.714359   12465 start.go:125] createHost starting for "" (driver="docker")
	I0908 16:37:13.716499   12465 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 16:37:13.716716   12465 start.go:159] libmachine.API.Create for "addons-739733" (driver="docker")
	I0908 16:37:13.716745   12465 client.go:168] LocalClient.Create starting
	I0908 16:37:13.716858   12465 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca.pem
	I0908 16:37:13.963709   12465 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/cert.pem
	I0908 16:37:14.280828   12465 cli_runner.go:164] Run: docker network inspect addons-739733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 16:37:14.297153   12465 cli_runner.go:211] docker network inspect addons-739733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 16:37:14.297211   12465 network_create.go:284] running [docker network inspect addons-739733] to gather additional debugging logs...
	I0908 16:37:14.297234   12465 cli_runner.go:164] Run: docker network inspect addons-739733
	W0908 16:37:14.313526   12465 cli_runner.go:211] docker network inspect addons-739733 returned with exit code 1
	I0908 16:37:14.313554   12465 network_create.go:287] error running [docker network inspect addons-739733]: docker network inspect addons-739733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-739733 not found
	I0908 16:37:14.313570   12465 network_create.go:289] output of [docker network inspect addons-739733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-739733 not found
	
	** /stderr **
	I0908 16:37:14.313675   12465 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 16:37:14.331388   12465 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0cfb0}
	I0908 16:37:14.331419   12465 network_create.go:124] attempt to create docker network addons-739733 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 16:37:14.331461   12465 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-739733 addons-739733
	I0908 16:37:14.383193   12465 network_create.go:108] docker network addons-739733 192.168.49.0/24 created
	I0908 16:37:14.383221   12465 kic.go:121] calculated static IP "192.168.49.2" for the "addons-739733" container
	I0908 16:37:14.383274   12465 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 16:37:14.399676   12465 cli_runner.go:164] Run: docker volume create addons-739733 --label name.minikube.sigs.k8s.io=addons-739733 --label created_by.minikube.sigs.k8s.io=true
	I0908 16:37:14.416634   12465 oci.go:103] Successfully created a docker volume addons-739733
	I0908 16:37:14.416715   12465 cli_runner.go:164] Run: docker run --rm --name addons-739733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-739733 --entrypoint /usr/bin/test -v addons-739733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 16:37:19.539716   12465 cli_runner.go:217] Completed: docker run --rm --name addons-739733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-739733 --entrypoint /usr/bin/test -v addons-739733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (5.122941509s)
	I0908 16:37:19.539756   12465 oci.go:107] Successfully prepared a docker volume addons-739733
	I0908 16:37:19.539801   12465 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:19.539826   12465 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 16:37:19.539899   12465 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-739733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 16:37:24.035180   12465 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-739733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.495237786s)
	I0908 16:37:24.035218   12465 kic.go:203] duration metric: took 4.495391009s to extract preloaded images to volume ...
	W0908 16:37:24.035361   12465 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 16:37:24.035477   12465 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 16:37:24.085080   12465 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-739733 --name addons-739733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-739733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-739733 --network addons-739733 --ip 192.168.49.2 --volume addons-739733:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 16:37:24.383125   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Running}}
	I0908 16:37:24.401816   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:24.420748   12465 cli_runner.go:164] Run: docker exec addons-739733 stat /var/lib/dpkg/alternatives/iptables
	I0908 16:37:24.466737   12465 oci.go:144] the created container "addons-739733" has a running status.
	I0908 16:37:24.466775   12465 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa...
	I0908 16:37:24.596181   12465 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 16:37:24.617044   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:24.636780   12465 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 16:37:24.636816   12465 kic_runner.go:114] Args: [docker exec --privileged addons-739733 chown docker:docker /home/docker/.ssh/authorized_keys]
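The "Creating ssh key for kic" step above generates an id_rsa / id_rsa.pub pair, copies the public half into the container as /home/docker/.ssh/authorized_keys, and chowns it. A sketch of the key-generation half, assuming golang.org/x/crypto/ssh (the helper name and 2048-bit key size are illustrative):

    package kic

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHKey writes an RSA private key to path and an authorized_keys-format
    // public key to path+".pub", matching the id_rsa / id_rsa.pub pair above.
    func newSSHKey(path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(path, priv, 0600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }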
	I0908 16:37:24.679271   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:24.700374   12465 machine.go:93] provisionDockerMachine start ...
	I0908 16:37:24.700477   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:24.726420   12465 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:24.726676   12465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 16:37:24.726690   12465 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 16:37:24.727329   12465 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57558->127.0.0.1:32768: read: connection reset by peer
	I0908 16:37:27.845325   12465 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-739733
	
	I0908 16:37:27.845352   12465 ubuntu.go:182] provisioning hostname "addons-739733"
	I0908 16:37:27.845427   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:27.863736   12465 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:27.863964   12465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 16:37:27.863978   12465 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-739733 && echo "addons-739733" | sudo tee /etc/hostname
	I0908 16:37:27.992727   12465 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-739733
	
	I0908 16:37:27.992922   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.011219   12465 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:28.011477   12465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 16:37:28.011495   12465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-739733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-739733/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-739733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 16:37:28.129781   12465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 16:37:28.129816   12465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-7450/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-7450/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-7450/.minikube}
	I0908 16:37:28.129868   12465 ubuntu.go:190] setting up certificates
	I0908 16:37:28.129885   12465 provision.go:84] configureAuth start
	I0908 16:37:28.129944   12465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-739733
	I0908 16:37:28.147776   12465 provision.go:143] copyHostCerts
	I0908 16:37:28.147869   12465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-7450/.minikube/ca.pem (1078 bytes)
	I0908 16:37:28.147984   12465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-7450/.minikube/cert.pem (1123 bytes)
	I0908 16:37:28.148104   12465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-7450/.minikube/key.pem (1675 bytes)
	I0908 16:37:28.148181   12465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-7450/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca-key.pem org=jenkins.addons-739733 san=[127.0.0.1 192.168.49.2 addons-739733 localhost minikube]
	I0908 16:37:28.297578   12465 provision.go:177] copyRemoteCerts
	I0908 16:37:28.297676   12465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 16:37:28.297722   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.314967   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:28.402233   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 16:37:28.424097   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 16:37:28.446508   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 16:37:28.469210   12465 provision.go:87] duration metric: took 339.312283ms to configureAuth
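configureAuth above generates a server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1 192.168.49.2 addons-739733 localhost minikube). A minimal crypto/x509 sketch of that signing step; the serial number, validity window, and helper name are assumptions, not minikube's actual provision code:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server certificate signed by the CA, carrying
    // the SANs from the log line above.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string,
        ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        // The CA key signs; the new key's public half goes into the cert.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }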
	I0908 16:37:28.469235   12465 ubuntu.go:206] setting minikube options for container-runtime
	I0908 16:37:28.469378   12465 config.go:182] Loaded profile config "addons-739733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:37:28.469469   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.486814   12465 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:28.487017   12465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0908 16:37:28.487033   12465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 16:37:28.697061   12465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 16:37:28.697086   12465 machine.go:96] duration metric: took 3.996683764s to provisionDockerMachine
	I0908 16:37:28.697096   12465 client.go:171] duration metric: took 14.980346084s to LocalClient.Create
	I0908 16:37:28.697113   12465 start.go:167] duration metric: took 14.980397955s to libmachine.API.Create "addons-739733"
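Every ssh_runner / libmachine "Run:" line in this provisioning phase follows one pattern: dial the container's published 22/tcp port on 127.0.0.1 (here 32768) and execute a single command per SSH session. A bare-bones sketch with golang.org/x/crypto/ssh; host-key checking is disabled here purely for illustration:

    package sshutil

    import "golang.org/x/crypto/ssh"

    // runSSH dials the published SSH port (e.g. "127.0.0.1:32768" as in the
    // log) and runs one command in a fresh session.
    func runSSH(addr, user string, signer ssh.Signer, cmd string) (string, error) {
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

The early "connection reset by peer" at 16:37:24 and successful retry at 16:37:27 show this dial being retried until sshd inside the container is up.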
	I0908 16:37:28.697122   12465 start.go:293] postStartSetup for "addons-739733" (driver="docker")
	I0908 16:37:28.697134   12465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 16:37:28.697193   12465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 16:37:28.697240   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.715370   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:28.802716   12465 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 16:37:28.805909   12465 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 16:37:28.805935   12465 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 16:37:28.805948   12465 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 16:37:28.805954   12465 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 16:37:28.805966   12465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7450/.minikube/addons for local assets ...
	I0908 16:37:28.806024   12465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7450/.minikube/files for local assets ...
	I0908 16:37:28.806055   12465 start.go:296] duration metric: took 108.927032ms for postStartSetup
	I0908 16:37:28.806329   12465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-739733
	I0908 16:37:28.824655   12465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/config.json ...
	I0908 16:37:28.824902   12465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 16:37:28.824943   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.843585   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:28.926579   12465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 16:37:28.930854   12465 start.go:128] duration metric: took 15.216479359s to createHost
	I0908 16:37:28.930884   12465 start.go:83] releasing machines lock for "addons-739733", held for 15.216649431s
	I0908 16:37:28.930954   12465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-739733
	I0908 16:37:28.947950   12465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 16:37:28.947999   12465 ssh_runner.go:195] Run: cat /version.json
	I0908 16:37:28.948017   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.948044   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:28.965528   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:28.966489   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:29.128910   12465 ssh_runner.go:195] Run: systemctl --version
	I0908 16:37:29.132931   12465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 16:37:29.270962   12465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 16:37:29.275148   12465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 16:37:29.292484   12465 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 16:37:29.292561   12465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 16:37:29.320132   12465 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
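The find/mv commands above neutralize the default CNI configs by renaming them with a .mk_disabled suffix so CRI-O ignores them in favor of the kindnet config installed later. An illustrative local equivalent of that rename pass (the glob patterns come from the log):

    package cni

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames matching files under /etc/cni/net.d to
    // <name>.mk_disabled, as done above for "*loopback.conf*", "*bridge*"
    // and "*podman*".
    func disableCNIConfigs(patterns ...string) ([]string, error) {
        var disabled []string
        for _, p := range patterns {
            matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }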
	I0908 16:37:29.320155   12465 start.go:495] detecting cgroup driver to use...
	I0908 16:37:29.320190   12465 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 16:37:29.320248   12465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 16:37:29.334343   12465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 16:37:29.344621   12465 docker.go:218] disabling cri-docker service (if available) ...
	I0908 16:37:29.344678   12465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 16:37:29.357419   12465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 16:37:29.370720   12465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 16:37:29.443234   12465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 16:37:29.526803   12465 docker.go:234] disabling docker service ...
	I0908 16:37:29.526862   12465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 16:37:29.545416   12465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 16:37:29.556749   12465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 16:37:29.631086   12465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 16:37:29.715616   12465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 16:37:29.726448   12465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 16:37:29.741312   12465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 16:37:29.741469   12465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:29.750789   12465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 16:37:29.750851   12465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:29.760298   12465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:29.769358   12465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:29.778548   12465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 16:37:29.787178   12465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:29.796406   12465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:29.811344   12465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
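The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause_image, cgroup_manager, conmon_cgroup, and the default_sysctls list all get rewritten before crio is restarted. A sketch of one such key rewrite in Go; note that minikube actually shells out to sed over SSH rather than editing files locally like this:

    package crio

    import (
        "os"
        "regexp"
    )

    // setConfKey rewrites one `key = value` line in a CRI-O drop-in, the
    // same effect as the sed calls above.
    func setConfKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0644)
    }

For example, setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs") matches the second sed above.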
	I0908 16:37:29.820877   12465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 16:37:29.828689   12465 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 16:37:29.828745   12465 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 16:37:29.842730   12465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 16:37:29.851458   12465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 16:37:29.930543   12465 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 16:37:30.019988   12465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 16:37:30.020066   12465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 16:37:30.023416   12465 start.go:563] Will wait 60s for crictl version
	I0908 16:37:30.023469   12465 ssh_runner.go:195] Run: which crictl
	I0908 16:37:30.026468   12465 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 16:37:30.059184   12465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 16:37:30.059288   12465 ssh_runner.go:195] Run: crio --version
	I0908 16:37:30.093930   12465 ssh_runner.go:195] Run: crio --version
	I0908 16:37:30.130674   12465 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 16:37:30.132057   12465 cli_runner.go:164] Run: docker network inspect addons-739733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 16:37:30.148648   12465 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 16:37:30.152547   12465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
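The bash pipeline above is an idempotent hosts-file update: strip any existing line for host.minikube.internal, then append the fresh gateway mapping and copy the result back over /etc/hosts. The same pattern recurs later for control-plane.minikube.internal. A literal (illustrative) Go translation:

    package hosts

    import (
        "os"
        "strings"
    )

    // pinHost drops any existing line ending in "\t<name>" from /etc/hosts,
    // then appends a fresh "<ip>\t<name>" mapping, mirroring the
    // grep -v / echo / cp pipeline above.
    func pinHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0644)
    }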
	I0908 16:37:30.163398   12465 kubeadm.go:875] updating cluster {Name:addons-739733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-739733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 16:37:30.163516   12465 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:30.163557   12465 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 16:37:30.229417   12465 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 16:37:30.229438   12465 crio.go:433] Images already preloaded, skipping extraction
	I0908 16:37:30.229477   12465 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 16:37:30.261607   12465 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 16:37:30.261629   12465 cache_images.go:85] Images are preloaded, skipping loading
	I0908 16:37:30.261637   12465 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 16:37:30.261746   12465 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-739733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-739733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
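The kubelet unit drop-in above is rendered from the node config and then scp'd as the 363-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A sketch of how such a drop-in could be templated; the field names here are invented for illustration and are not minikube's actual config types:

    package kubelet

    import "text/template"

    // unitTmpl renders a systemd drop-in like the one above; field names
    // (RuntimeService, BinDir, NodeName, NodeIP) are illustrative.
    var unitTmpl = template.Must(template.New("kubelet").Parse(`[Unit]
    Wants={{.RuntimeService}}

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `))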
	I0908 16:37:30.261807   12465 ssh_runner.go:195] Run: crio config
	I0908 16:37:30.303265   12465 cni.go:84] Creating CNI manager for ""
	I0908 16:37:30.303289   12465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 16:37:30.303299   12465 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 16:37:30.303319   12465 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-739733 NodeName:addons-739733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 16:37:30.303423   12465 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-739733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 16:37:30.303476   12465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 16:37:30.312063   12465 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 16:37:30.312134   12465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 16:37:30.320712   12465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 16:37:30.337857   12465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 16:37:30.354492   12465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 16:37:30.371498   12465 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 16:37:30.374949   12465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 16:37:30.385472   12465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 16:37:30.462816   12465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 16:37:30.475740   12465 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733 for IP: 192.168.49.2
	I0908 16:37:30.475765   12465 certs.go:194] generating shared ca certs ...
	I0908 16:37:30.475785   12465 certs.go:226] acquiring lock for ca certs: {Name:mk8ef4ba81c554c9252c23dcf2ec779e6a28039b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:30.475911   12465 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.key
	I0908 16:37:30.913830   12465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt ...
	I0908 16:37:30.913861   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt: {Name:mk0b15901ac24e6f349acc0a127a1596c0f6a3d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:30.914058   12465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7450/.minikube/ca.key ...
	I0908 16:37:30.914075   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/ca.key: {Name:mk145c1e88ae16ff87eb873dbf78ece6d45b7cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:30.914179   12465 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.key
	I0908 16:37:30.992316   12465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.crt ...
	I0908 16:37:30.992349   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.crt: {Name:mk11c7e92dd426a2e401fb326b3883b4a63b6542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:30.992536   12465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.key ...
	I0908 16:37:30.992553   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.key: {Name:mk3753d59d8cf1b2efc3eed0467c1f2851da3bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:30.992651   12465 certs.go:256] generating profile certs ...
	I0908 16:37:30.992712   12465 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.key
	I0908 16:37:30.992729   12465 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt with IP's: []
	I0908 16:37:31.092131   12465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt ...
	I0908 16:37:31.092162   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: {Name:mk5bce6a9497561ca8a61396ccad6c5eef9eee18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:31.092349   12465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.key ...
	I0908 16:37:31.092363   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.key: {Name:mk7d5bad0ff4c525476f76e96accf6143909de12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:31.092471   12465 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.key.c3d2498f
	I0908 16:37:31.092496   12465 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.crt.c3d2498f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 16:37:31.398597   12465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.crt.c3d2498f ...
	I0908 16:37:31.398637   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.crt.c3d2498f: {Name:mkfd98ad580140a431cb03033dc6657697627f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:31.398811   12465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.key.c3d2498f ...
	I0908 16:37:31.398827   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.key.c3d2498f: {Name:mk4623429e0151b69a3db4b3f507accde720ee7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:31.398929   12465 certs.go:381] copying /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.crt.c3d2498f -> /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.crt
	I0908 16:37:31.399006   12465 certs.go:385] copying /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.key.c3d2498f -> /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.key
	I0908 16:37:31.399053   12465 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.key
	I0908 16:37:31.399069   12465 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.crt with IP's: []
	I0908 16:37:31.684912   12465 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.crt ...
	I0908 16:37:31.684944   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.crt: {Name:mkf221bf9b8b86e473e1992f63f1417b345dce66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:31.685125   12465 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.key ...
	I0908 16:37:31.685140   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.key: {Name:mk00c059d8d74e15e2d21047caf3b5f5948ee6e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:31.685348   12465 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca-key.pem (1679 bytes)
	I0908 16:37:31.685402   12465 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/ca.pem (1078 bytes)
	I0908 16:37:31.685434   12465 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/cert.pem (1123 bytes)
	I0908 16:37:31.685465   12465 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7450/.minikube/certs/key.pem (1675 bytes)
	I0908 16:37:31.686027   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 16:37:31.708317   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 16:37:31.730545   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 16:37:31.751891   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 16:37:31.773758   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 16:37:31.795589   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 16:37:31.817815   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 16:37:31.839223   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 16:37:31.861310   12465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 16:37:31.882889   12465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 16:37:31.898748   12465 ssh_runner.go:195] Run: openssl version
	I0908 16:37:31.903739   12465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 16:37:31.912312   12465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 16:37:31.915630   12465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 16:37:31.915684   12465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 16:37:31.922000   12465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
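The symlink name b5213941.0 above is OpenSSL's subject-name hash for minikubeCA.pem with a ".0" suffix; hashed names in /etc/ssl/certs are how TLS verification locates a CA without scanning every file. The hash can be reproduced the same way the log does it, by invoking openssl x509 -hash:

    package certs

    import (
        "os/exec"
        "strings"
    )

    // subjectHash returns OpenSSL's subject-name hash for a certificate,
    // e.g. "b5213941" for the minikube CA above; <hash>.0 is the symlink name.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }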
	I0908 16:37:31.930434   12465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 16:37:31.933595   12465 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 16:37:31.933641   12465 kubeadm.go:392] StartCluster: {Name:addons-739733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-739733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:37:31.933744   12465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 16:37:31.933795   12465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 16:37:31.966603   12465 cri.go:89] found id: ""
	I0908 16:37:31.966664   12465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 16:37:31.974910   12465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 16:37:31.983052   12465 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 16:37:31.983101   12465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 16:37:31.990675   12465 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 16:37:31.990698   12465 kubeadm.go:157] found existing configuration files:
	
	I0908 16:37:31.990736   12465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 16:37:31.998175   12465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 16:37:31.998231   12465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 16:37:32.006012   12465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 16:37:32.014014   12465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 16:37:32.014058   12465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 16:37:32.022230   12465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 16:37:32.029873   12465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 16:37:32.029925   12465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 16:37:32.037489   12465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 16:37:32.045324   12465 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 16:37:32.045371   12465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 16:37:32.053064   12465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 16:37:32.088055   12465 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 16:37:32.088109   12465 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 16:37:32.105221   12465 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 16:37:32.105364   12465 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 16:37:32.105452   12465 kubeadm.go:310] OS: Linux
	I0908 16:37:32.105520   12465 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 16:37:32.105602   12465 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 16:37:32.105683   12465 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 16:37:32.105750   12465 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 16:37:32.105856   12465 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 16:37:32.105938   12465 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 16:37:32.106004   12465 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 16:37:32.106066   12465 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 16:37:32.106127   12465 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 16:37:32.163932   12465 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 16:37:32.164062   12465 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 16:37:32.164184   12465 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 16:37:32.170537   12465 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 16:37:32.174072   12465 out.go:252]   - Generating certificates and keys ...
	I0908 16:37:32.174174   12465 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 16:37:32.174258   12465 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 16:37:32.359510   12465 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 16:37:32.451320   12465 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 16:37:32.758596   12465 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 16:37:32.850850   12465 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 16:37:33.029934   12465 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 16:37:33.030110   12465 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-739733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 16:37:33.168375   12465 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 16:37:33.168498   12465 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-739733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 16:37:33.366264   12465 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 16:37:33.469130   12465 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 16:37:33.652544   12465 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 16:37:33.652640   12465 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 16:37:33.794940   12465 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 16:37:34.144991   12465 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 16:37:34.522923   12465 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 16:37:34.812477   12465 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 16:37:35.260898   12465 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 16:37:35.261366   12465 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 16:37:35.263511   12465 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 16:37:35.265685   12465 out.go:252]   - Booting up control plane ...
	I0908 16:37:35.265810   12465 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 16:37:35.265904   12465 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 16:37:35.265986   12465 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 16:37:35.274695   12465 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 16:37:35.274800   12465 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 16:37:35.280641   12465 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 16:37:35.280785   12465 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 16:37:35.280832   12465 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 16:37:35.356577   12465 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 16:37:35.356744   12465 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 16:37:35.858257   12465 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.817508ms
	I0908 16:37:35.861801   12465 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 16:37:35.861943   12465 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 16:37:35.862075   12465 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 16:37:35.862214   12465 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 16:37:37.673083   12465 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.811226403s
	I0908 16:37:39.263453   12465 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.40164384s
	I0908 16:37:40.862891   12465 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001116341s
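The control-plane-check phase above polls each component's health endpoint (kube-apiserver /livez, controller-manager and scheduler on their local ports) until it answers 200 or the 4m0s budget runs out. A stripped-down version of that wait loop; poll interval and TLS handling here are illustrative, not kubeadm's exact code:

    package wait

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url (e.g. https://192.168.49.2:8443/livez) until it
    // returns 200 or the timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Bootstrap endpoints serve self-signed certs at this point.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }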
	I0908 16:37:40.874275   12465 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 16:37:40.886086   12465 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 16:37:40.894725   12465 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 16:37:40.894987   12465 kubeadm.go:310] [mark-control-plane] Marking the node addons-739733 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 16:37:40.903186   12465 kubeadm.go:310] [bootstrap-token] Using token: b1s91z.31elst3jy4bh6pm2
	I0908 16:37:40.904544   12465 out.go:252]   - Configuring RBAC rules ...
	I0908 16:37:40.904642   12465 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 16:37:40.909919   12465 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 16:37:40.915406   12465 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 16:37:40.918190   12465 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 16:37:40.920678   12465 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 16:37:40.924103   12465 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 16:37:41.269542   12465 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 16:37:41.685932   12465 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 16:37:42.269307   12465 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 16:37:42.270276   12465 kubeadm.go:310] 
	I0908 16:37:42.270374   12465 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 16:37:42.270384   12465 kubeadm.go:310] 
	I0908 16:37:42.270480   12465 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 16:37:42.270488   12465 kubeadm.go:310] 
	I0908 16:37:42.270527   12465 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 16:37:42.270622   12465 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 16:37:42.270707   12465 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 16:37:42.270726   12465 kubeadm.go:310] 
	I0908 16:37:42.270804   12465 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 16:37:42.270814   12465 kubeadm.go:310] 
	I0908 16:37:42.270885   12465 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 16:37:42.270913   12465 kubeadm.go:310] 
	I0908 16:37:42.270998   12465 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 16:37:42.271104   12465 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 16:37:42.271193   12465 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 16:37:42.271211   12465 kubeadm.go:310] 
	I0908 16:37:42.271335   12465 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 16:37:42.271446   12465 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 16:37:42.271456   12465 kubeadm.go:310] 
	I0908 16:37:42.271568   12465 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b1s91z.31elst3jy4bh6pm2 \
	I0908 16:37:42.271713   12465 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6de78bdb5098b8c1df3c7122cd00744a41e7d1c4f4174be72c9a271135e5b7e \
	I0908 16:37:42.271743   12465 kubeadm.go:310] 	--control-plane 
	I0908 16:37:42.271753   12465 kubeadm.go:310] 
	I0908 16:37:42.271894   12465 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 16:37:42.271903   12465 kubeadm.go:310] 
	I0908 16:37:42.272013   12465 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b1s91z.31elst3jy4bh6pm2 \
	I0908 16:37:42.272153   12465 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a6de78bdb5098b8c1df3c7122cd00744a41e7d1c4f4174be72c9a271135e5b7e 
	I0908 16:37:42.273356   12465 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 16:37:42.273542   12465 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 16:37:42.273668   12465 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
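The --discovery-token-ca-cert-hash printed in the join commands above is "sha256:" plus the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info (kubeadm's pubkeypin format). It can be reproduced from ca.crt like so:

    package join

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
    )

    // caCertHash computes kubeadm's discovery-token-ca-cert-hash from a
    // PEM-encoded CA certificate.
    func caCertHash(caPEM []byte) (string, error) {
        block, _ := pem.Decode(caPEM)
        if block == nil {
            return "", fmt.Errorf("no PEM block in CA input")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(spki)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

Pinning the SPKI hash lets a joining node verify it is talking to the right cluster CA before trusting anything served over the bootstrap token.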
	I0908 16:37:42.273706   12465 cni.go:84] Creating CNI manager for ""
	I0908 16:37:42.273723   12465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 16:37:42.276068   12465 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 16:37:42.277768   12465 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 16:37:42.281737   12465 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 16:37:42.281762   12465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 16:37:42.299121   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 16:37:42.499182   12465 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 16:37:42.499255   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:42.499290   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-739733 minikube.k8s.io/updated_at=2025_09_08T16_37_42_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6 minikube.k8s.io/name=addons-739733 minikube.k8s.io/primary=true
	I0908 16:37:42.506325   12465 ops.go:34] apiserver oom_adj: -16
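An oom_adj of -16, read back above, means the apiserver is strongly shielded from the kernel OOM killer (the legacy scale runs from -17, never kill, to +15). The same value can be inspected by hand; a sketch, assuming the process name matches:
	cat /proc/$(pgrep -o kube-apiserver)/oom_adj        # legacy scale: -17 .. +15
	cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj  # modern scale: -1000 .. +1000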
	I0908 16:37:42.588399   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:43.089073   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:43.589189   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:44.088692   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:44.588474   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:45.088773   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:45.588804   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:46.089386   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:46.589284   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:47.088956   12465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:37:47.156258   12465 kubeadm.go:1105] duration metric: took 4.657057201s to wait for elevateKubeSystemPrivileges
	I0908 16:37:47.156292   12465 kubeadm.go:394] duration metric: took 15.22265533s to StartCluster
	I0908 16:37:47.156320   12465 settings.go:142] acquiring lock: {Name:mk8ffe3eb8fa823f0743e1f4cefbc9040648ebff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:47.156422   12465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:37:47.156940   12465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7450/kubeconfig: {Name:mk757a1482255e2f0fdca5d5ab60b645788bd16f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:47.157183   12465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 16:37:47.157410   12465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 16:37:47.157416   12465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 16:37:47.157539   12465 addons.go:69] Setting yakd=true in profile "addons-739733"
	I0908 16:37:47.157572   12465 addons.go:238] Setting addon yakd=true in "addons-739733"
	I0908 16:37:47.157606   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.158222   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.158267   12465 addons.go:69] Setting inspektor-gadget=true in profile "addons-739733"
	I0908 16:37:47.158302   12465 addons.go:238] Setting addon inspektor-gadget=true in "addons-739733"
	I0908 16:37:47.158358   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.158460   12465 addons.go:69] Setting registry-creds=true in profile "addons-739733"
	I0908 16:37:47.158480   12465 addons.go:238] Setting addon registry-creds=true in "addons-739733"
	I0908 16:37:47.158507   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.158517   12465 addons.go:69] Setting default-storageclass=true in profile "addons-739733"
	I0908 16:37:47.158622   12465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-739733"
	I0908 16:37:47.158681   12465 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-739733"
	I0908 16:37:47.158772   12465 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-739733"
	I0908 16:37:47.158934   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.158943   12465 addons.go:69] Setting metrics-server=true in profile "addons-739733"
	I0908 16:37:47.158957   12465 addons.go:238] Setting addon metrics-server=true in "addons-739733"
	I0908 16:37:47.158976   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.159241   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.159437   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.159493   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.159841   12465 addons.go:69] Setting volcano=true in profile "addons-739733"
	I0908 16:37:47.159904   12465 addons.go:238] Setting addon volcano=true in "addons-739733"
	I0908 16:37:47.159923   12465 addons.go:69] Setting ingress=true in profile "addons-739733"
	I0908 16:37:47.159957   12465 addons.go:238] Setting addon ingress=true in "addons-739733"
	I0908 16:37:47.159977   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.159999   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.160347   12465 addons.go:69] Setting storage-provisioner=true in profile "addons-739733"
	I0908 16:37:47.160469   12465 addons.go:238] Setting addon storage-provisioner=true in "addons-739733"
	I0908 16:37:47.160515   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.160621   12465 addons.go:69] Setting cloud-spanner=true in profile "addons-739733"
	I0908 16:37:47.160667   12465 addons.go:238] Setting addon cloud-spanner=true in "addons-739733"
	I0908 16:37:47.160702   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.160852   12465 out.go:179] * Verifying Kubernetes components...
	I0908 16:37:47.160974   12465 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-739733"
	I0908 16:37:47.160998   12465 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-739733"
	I0908 16:37:47.161037   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.161178   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.158937   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.161683   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.161779   12465 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-739733"
	I0908 16:37:47.159906   12465 config.go:182] Loaded profile config "addons-739733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:37:47.160537   12465 addons.go:69] Setting gcp-auth=true in profile "addons-739733"
	I0908 16:37:47.161947   12465 mustload.go:65] Loading cluster: addons-739733
	I0908 16:37:47.161975   12465 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-739733"
	I0908 16:37:47.162004   12465 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-739733"
	I0908 16:37:47.162043   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.162079   12465 addons.go:69] Setting registry=true in profile "addons-739733"
	I0908 16:37:47.162107   12465 addons.go:238] Setting addon registry=true in "addons-739733"
	I0908 16:37:47.162138   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.162257   12465 config.go:182] Loaded profile config "addons-739733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:37:47.162515   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.162556   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.162620   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.162793   12465 addons.go:69] Setting ingress-dns=true in profile "addons-739733"
	I0908 16:37:47.160529   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.160542   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.160551   12465 addons.go:69] Setting volumesnapshots=true in profile "addons-739733"
	I0908 16:37:47.161852   12465 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-739733"
	I0908 16:37:47.163176   12465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 16:37:47.167510   12465 addons.go:238] Setting addon ingress-dns=true in "addons-739733"
	I0908 16:37:47.167815   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.167976   12465 addons.go:238] Setting addon volumesnapshots=true in "addons-739733"
	I0908 16:37:47.168026   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.168528   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.168600   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.169375   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.202118   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.206075   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.210740   12465 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 16:37:47.213813   12465 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 16:37:47.213838   12465 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 16:37:47.213918   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.217739   12465 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 16:37:47.218437   12465 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-739733"
	I0908 16:37:47.218491   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.219157   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.220822   12465 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 16:37:47.220842   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 16:37:47.220911   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.224636   12465 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 16:37:47.224713   12465 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 16:37:47.226946   12465 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 16:37:47.226970   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 16:37:47.227065   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.227347   12465 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 16:37:47.227364   12465 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 16:37:47.227439   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.241701   12465 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W0908 16:37:47.242169   12465 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
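The warning above is expected for this job: the volcano addon bails out because it does not support the crio runtime, and enabling continues with the rest of the set. Which addons actually ended up enabled can be listed afterwards; an illustrative command, not part of this log:
	minikube -p addons-739733 addons list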
	I0908 16:37:47.245685   12465 addons.go:238] Setting addon default-storageclass=true in "addons-739733"
	I0908 16:37:47.245733   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.246067   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:47.246978   12465 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 16:37:47.247043   12465 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 16:37:47.247061   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 16:37:47.247120   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.248238   12465 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 16:37:47.248256   12465 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 16:37:47.248305   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.257174   12465 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 16:37:47.259030   12465 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 16:37:47.260442   12465 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 16:37:47.260464   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 16:37:47.260529   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.266721   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 16:37:47.267993   12465 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 16:37:47.268015   12465 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 16:37:47.268095   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.282234   12465 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 16:37:47.284106   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:47.285856   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.287885   12465 out.go:179]   - Using image docker.io/busybox:stable
	I0908 16:37:47.289077   12465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 16:37:47.289097   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 16:37:47.289153   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.301651   12465 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 16:37:47.303099   12465 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 16:37:47.304374   12465 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 16:37:47.305561   12465 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 16:37:47.305581   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 16:37:47.305640   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.305755   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 16:37:47.307526   12465 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 16:37:47.309551   12465 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 16:37:47.309581   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 16:37:47.309639   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.311633   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.312553   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.312942   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 16:37:47.314531   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 16:37:47.315775   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 16:37:47.316758   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.318496   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 16:37:47.319495   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.323669   12465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 16:37:47.323692   12465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 16:37:47.323745   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.323779   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 16:37:47.323782   12465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 16:37:47.324806   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.325564   12465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 16:37:47.325591   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 16:37:47.325650   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.326835   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 16:37:47.328050   12465 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 16:37:47.329243   12465 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 16:37:47.329342   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 16:37:47.329354   12465 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 16:37:47.329408   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.330557   12465 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 16:37:47.330577   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 16:37:47.330628   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:47.333013   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.337137   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.340865   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.342919   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.348299   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.355326   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.355519   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.357077   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:47.361705   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	W0908 16:37:47.367825   12465 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 16:37:47.367857   12465 retry.go:31] will retry after 207.292629ms: ssh: handshake failed: EOF
	W0908 16:37:47.373849   12465 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 16:37:47.373878   12465 retry.go:31] will retry after 314.73775ms: ssh: handshake failed: EOF
	I0908 16:37:47.563338   12465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 16:37:47.563407   12465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 16:37:47.666300   12465 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 16:37:47.666391   12465 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 16:37:47.671699   12465 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 16:37:47.671784   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 16:37:47.773988   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 16:37:47.778973   12465 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 16:37:47.779068   12465 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 16:37:47.783561   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 16:37:47.862690   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 16:37:47.866742   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 16:37:47.866997   12465 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 16:37:47.867053   12465 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 16:37:47.872403   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 16:37:47.877490   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 16:37:47.878499   12465 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 16:37:47.878568   12465 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 16:37:47.965284   12465 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:37:47.965313   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 16:37:47.965693   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 16:37:47.965971   12465 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 16:37:47.965989   12465 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 16:37:47.966097   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 16:37:47.984167   12465 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 16:37:47.984255   12465 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 16:37:47.984206   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 16:37:48.078865   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:37:48.163918   12465 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 16:37:48.164028   12465 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 16:37:48.164363   12465 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 16:37:48.164420   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 16:37:48.272324   12465 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 16:37:48.272411   12465 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 16:37:48.575314   12465 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 16:37:48.575414   12465 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 16:37:48.675186   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 16:37:48.769756   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 16:37:48.776108   12465 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 16:37:48.776188   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 16:37:48.976534   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 16:37:48.976639   12465 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 16:37:49.170161   12465 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.606725166s)
	I0908 16:37:49.170291   12465 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
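Reconstructed from the sed expressions in the pipeline above, the fragment injected into the CoreDNS Corefile looks like this (indentation approximate):
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}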
	I0908 16:37:49.170095   12465 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.606707031s)
	I0908 16:37:49.172125   12465 node_ready.go:35] waiting up to 6m0s for node "addons-739733" to be "Ready" ...
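The wait that starts here polls the node's Ready condition. The equivalent one-off check from a shell would be something like this (a sketch, not what minikube runs internally):
	kubectl get node addons-739733 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'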
	I0908 16:37:49.368521   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 16:37:49.368609   12465 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 16:37:49.465053   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 16:37:49.663453   12465 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 16:37:49.663541   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 16:37:49.666338   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 16:37:49.666421   12465 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 16:37:49.965602   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 16:37:49.974214   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 16:37:49.974302   12465 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 16:37:49.976337   12465 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-739733" context rescaled to 1 replicas
	I0908 16:37:50.170651   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 16:37:50.170691   12465 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 16:37:50.372405   12465 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 16:37:50.372509   12465 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 16:37:50.967504   12465 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 16:37:50.967613   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 16:37:51.176835   12465 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 16:37:51.176861   12465 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	W0908 16:37:51.281486   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:37:51.362411   12465 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 16:37:51.362445   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 16:37:51.480499   12465 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 16:37:51.480529   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 16:37:51.578176   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.80408562s)
	I0908 16:37:51.578254   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.794615014s)
	I0908 16:37:51.578302   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.715588353s)
	I0908 16:37:51.578564   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.711794178s)
	I0908 16:37:51.578638   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.706208323s)
	I0908 16:37:51.679260   12465 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 16:37:51.679291   12465 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 16:37:51.785495   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 16:37:52.076011   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.198485257s)
	I0908 16:37:52.076362   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.110239575s)
	I0908 16:37:52.874951   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.909218219s)
	I0908 16:37:52.874994   12465 addons.go:479] Verifying addon ingress=true in "addons-739733"
	I0908 16:37:52.875214   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.890865607s)
	I0908 16:37:52.875353   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.796394375s)
	W0908 16:37:52.875392   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:52.875413   12465 retry.go:31] will retry after 180.174999ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
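The validation error above lines up with the earlier transfer of ig-crd.yaml at just 14 bytes: the file arrived on the node essentially empty, so the required top-level fields are missing. For reference, a minimal (hypothetical) header that would satisfy this particular check on a CRD manifest:
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition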
	I0908 16:37:52.875416   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.200199767s)
	I0908 16:37:52.875447   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.105596892s)
	I0908 16:37:52.875470   12465 addons.go:479] Verifying addon registry=true in "addons-739733"
	I0908 16:37:52.875494   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.410339962s)
	I0908 16:37:52.875448   12465 addons.go:479] Verifying addon metrics-server=true in "addons-739733"
	I0908 16:37:52.876828   12465 out.go:179] * Verifying ingress addon...
	I0908 16:37:52.877825   12465 out.go:179] * Verifying registry addon...
	I0908 16:37:52.877824   12465 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-739733 service yakd-dashboard -n yakd-dashboard
	
	I0908 16:37:52.879456   12465 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 16:37:52.880369   12465 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 16:37:52.882773   12465 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 16:37:52.882791   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:52.883016   12465 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 16:37:52.883028   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
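The kapi waits above and below poll pods by label selector until they leave Pending. Done by hand with kubectl, the ingress wait would look roughly like this (illustrative; minikube does this via client-go, not kubectl):
	kubectl -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m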
	I0908 16:37:53.056448   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:37:53.383113   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:53.383789   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 16:37:53.676012   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:37:53.883709   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:53.883737   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:54.103149   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.137405461s)
	W0908 16:37:54.103194   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 16:37:54.103224   12465 retry.go:31] will retry after 232.248929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
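kubectl's closing hint ("ensure CRDs are installed first") describes the standard two-phase apply, which the retry below effectively achieves by re-running once the CRDs are registered. Done explicitly, with the files from this log, it would look like:
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml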
	I0908 16:37:54.103351   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.317761029s)
	I0908 16:37:54.103385   12465 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-739733"
	I0908 16:37:54.105752   12465 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 16:37:54.107581   12465 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 16:37:54.165273   12465 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 16:37:54.165304   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:54.335957   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 16:37:54.383038   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:54.383433   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:54.425679   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.36915093s)
	W0908 16:37:54.425750   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:54.425776   12465 retry.go:31] will retry after 327.667481ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:54.664518   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:54.754341   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:37:54.882783   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:54.882804   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:54.890479   12465 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 16:37:54.890541   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:54.908742   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:55.027939   12465 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 16:37:55.046191   12465 addons.go:238] Setting addon gcp-auth=true in "addons-739733"
	I0908 16:37:55.046246   12465 host.go:66] Checking if "addons-739733" exists ...
	I0908 16:37:55.046660   12465 cli_runner.go:164] Run: docker container inspect addons-739733 --format={{.State.Status}}
	I0908 16:37:55.068820   12465 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 16:37:55.068872   12465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-739733
	I0908 16:37:55.088465   12465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/addons-739733/id_rsa Username:docker}
	I0908 16:37:55.111792   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:55.383128   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:55.383357   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:55.611577   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:55.883021   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:55.883107   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:56.110919   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:37:56.176928   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:37:56.382659   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:56.382913   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:56.610608   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:56.883049   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:56.883088   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:56.897744   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.561744174s)
	I0908 16:37:56.897830   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.143455078s)
	W0908 16:37:56.897865   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:56.897883   12465 retry.go:31] will retry after 425.420565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
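The validation failure above is kubectl refusing a document in ig-crd.yaml that declares neither apiVersion nor kind; every Kubernetes object must carry both, so the apply exits 1 even though the other manifests in the batch are accepted ("unchanged"/"configured" in stdout). A minimal sketch of the same pre-flight check, assuming gopkg.in/yaml.v3 is available; checkTypeMeta and the sample document are illustrative, not minikube or kubectl code:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields kubectl's validator insists on for every
// Kubernetes object.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// checkTypeMeta reproduces the shape of the "[apiVersion not set, kind not
// set]" complaint for a single YAML document.
func checkTypeMeta(doc []byte) error {
	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return err
	}
	var missing []string
	if tm.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if tm.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: [%s]", strings.Join(missing, ", "))
	}
	return nil
}

func main() {
	bad := []byte("metadata:\n  name: example\n") // no apiVersion, no kind
	fmt.Println(checkTypeMeta(bad))
	// error validating data: [apiVersion not set, kind not set]
}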
	I0908 16:37:56.897922   12465 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.829081082s)
	I0908 16:37:56.899931   12465 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 16:37:56.901507   12465 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 16:37:56.902804   12465 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 16:37:56.902823   12465 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 16:37:56.921862   12465 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 16:37:56.921892   12465 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 16:37:56.938747   12465 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 16:37:56.938768   12465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 16:37:56.955285   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 16:37:57.111113   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:57.272643   12465 addons.go:479] Verifying addon gcp-auth=true in "addons-739733"
	I0908 16:37:57.274348   12465 out.go:179] * Verifying gcp-auth addon...
	I0908 16:37:57.276387   12465 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 16:37:57.279763   12465 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 16:37:57.279787   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
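The kapi.go:96 lines throughout this log poll the API server for pods matching a label selector until they leave Pending, at roughly a 0.5s cadence per selector. A minimal sketch of that wait loop with client-go, assuming a reachable kubeconfig; waitForPodsRunning is a hypothetical helper, not minikube's kapi implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until at least one
// exists and all of them report phase Running, or ctx expires.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // ~ the cadence seen in this log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitForPodsRunning(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"))
}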
	I0908 16:37:57.323823   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:37:57.383367   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:57.383427   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:57.611528   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:57.779328   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 16:37:57.853794   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:57.853820   12465 retry.go:31] will retry after 889.967267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:57.882719   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:57.883464   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:58.110222   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:58.279450   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:37:58.382196   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:58.382689   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:58.610480   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:37:58.674828   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:37:58.744981   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:37:58.779868   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:37:58.882969   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:58.883387   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:59.110803   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:37:59.276878   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:59.276923   12465 retry.go:31] will retry after 1.375583477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:37:59.278873   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:37:59.382526   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:37:59.383015   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:59.611030   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:37:59.780068   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:37:59.882840   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:37:59.883017   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:00.110790   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:00.279312   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:00.383053   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:00.383099   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:00.610721   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:00.652836   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 16:38:00.675870   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:00.780204   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:00.883053   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:00.883268   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:01.110158   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:01.186652   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:01.186684   12465 retry.go:31] will retry after 1.424440217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
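The retry.go:31 delays between these failed applies grow unevenly (425ms, 890ms, 1.38s, 1.42s so far, reaching 7-9s further down). A common pattern that produces delays of this shape is exponential backoff with random jitter; the sketch below is that pattern under my own assumptions, not minikube's retry.go, whose exact policy may differ:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, exponentially
// growing delay between failures: a random value in [d/2, 3d/2) where
// d = base * 2^i.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                           // 0.5s, 1s, 2s, 4s, ...
		d = d/2 + time.Duration(rand.Int63n(int64(d))) // jitter into [d/2, 3d/2)
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, 500*time.Millisecond, func() error {
		return fmt.Errorf("Process exited with status 1")
	})
}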
	I0908 16:38:01.279277   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:01.383010   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:01.383163   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:01.611056   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:01.779390   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:01.883176   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:01.883283   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:02.110780   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:02.279740   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:02.382665   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:02.383040   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:02.610880   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:02.611937   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:02.779258   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:02.883048   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:02.883175   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:03.110873   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:03.136683   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:03.136708   12465 retry.go:31] will retry after 4.267147495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 16:38:03.175085   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:03.279811   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:03.382575   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:03.383154   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:03.611478   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:03.779960   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:03.882907   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:03.883359   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:04.111108   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:04.278813   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:04.382488   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:04.383036   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:04.610512   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:04.779697   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:04.882485   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:04.882935   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:05.110494   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:05.279454   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:05.383083   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:05.383258   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:05.611532   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:05.674843   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:05.779338   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:05.882381   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:05.883040   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:06.110925   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:06.279953   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:06.382653   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:06.383297   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:06.611101   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:06.779078   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:06.882580   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:06.882810   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:07.110421   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:07.279717   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:07.382481   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:07.383021   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:07.404194   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:07.610758   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:07.675371   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:07.779423   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:07.882352   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:07.882919   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 16:38:07.928344   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:07.928375   12465 retry.go:31] will retry after 2.76476358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:08.111267   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:08.279319   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:08.383002   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:08.383015   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:08.610727   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:08.779839   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:08.882667   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:08.883190   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:09.110993   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:09.278855   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:09.382604   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:09.383164   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:09.611310   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:09.675818   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:09.779504   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:09.882186   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:09.882916   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:10.110755   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:10.279480   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:10.382455   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:10.382741   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:10.610458   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:10.693920   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:10.780474   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:10.882410   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:10.883176   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:11.110781   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:11.224789   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:11.224823   12465 retry.go:31] will retry after 3.366838357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:11.279615   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:11.382292   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:11.382968   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:11.610929   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:11.779921   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:11.882936   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:11.883446   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:12.111236   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:12.175668   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:12.279190   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:12.382674   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:12.382767   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:12.611852   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:12.779237   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:12.883088   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:12.883202   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:13.110939   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:13.278957   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:13.382586   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:13.383213   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:13.611382   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:13.779358   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:13.882876   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:13.882928   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:14.110682   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:14.279330   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:14.382829   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:14.382964   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:14.592256   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:14.611089   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:14.675579   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:14.778905   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:14.882077   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:14.882814   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:15.110699   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:15.114749   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:15.114794   12465 retry.go:31] will retry after 7.023611906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:15.281706   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:15.382301   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:15.382976   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:15.610585   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:15.779910   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:15.882723   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:15.882899   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:16.110187   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:16.279031   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:16.382714   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:16.383351   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:16.611303   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:16.779210   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:16.883381   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:16.883388   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:17.110784   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:17.175068   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:17.279865   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:17.382552   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:17.383241   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:17.610968   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:17.778832   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:17.882741   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:17.883136   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:18.110891   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:18.279862   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:18.382547   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:18.383258   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:18.611150   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:18.779300   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:18.882875   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:18.882985   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:19.110320   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:19.279102   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:19.382755   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:19.383022   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:19.610700   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:19.675080   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:19.779900   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:19.882594   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:19.883022   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:20.110926   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:20.279219   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:20.383209   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:20.383257   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:20.611087   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:20.779661   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:20.882666   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:20.883379   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:21.111047   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:21.279114   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:21.382900   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:21.383031   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:21.610974   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:21.675255   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:21.779825   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:21.882694   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:21.882986   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:22.111572   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:22.138622   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:22.279012   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:22.382722   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:22.382798   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:22.610939   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:22.687247   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:22.687278   12465 retry.go:31] will retry after 8.726891379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:22.779884   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:22.883742   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:22.883888   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:23.110452   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:23.279709   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:23.382605   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:23.383137   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:23.611320   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:23.779473   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:23.883691   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:23.883761   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:24.110519   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:24.174912   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:24.279374   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:24.383176   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:24.383370   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:24.611195   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:24.779560   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:24.882072   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:24.882936   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:25.110920   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:25.279184   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:25.382852   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:25.383016   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:25.610849   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:25.779179   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:25.883190   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:25.883266   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:26.110864   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:26.175469   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:26.279789   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:26.382585   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:26.383188   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:26.610966   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:26.778932   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:26.882461   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:26.882615   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:27.110054   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:27.279305   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:27.383113   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:27.383191   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:27.610922   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:27.779501   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:27.882091   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:27.882726   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:28.110542   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:28.279961   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:28.382801   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:28.383287   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:28.611005   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:28.675487   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:28.778850   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:28.882536   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:28.883157   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:29.111053   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:29.279442   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:29.382584   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:29.383201   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:29.610904   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:29.780229   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:29.882937   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:29.882991   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:30.110763   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:30.279688   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:30.382590   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:30.382913   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:30.611121   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 16:38:30.675611   12465 node_ready.go:57] node "addons-739733" has "Ready":"False" status (will retry)
	I0908 16:38:30.779927   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:30.882626   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:30.882749   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:31.110650   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:31.279666   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:31.382619   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:31.383177   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:31.415380   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:31.668300   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:31.676072   12465 node_ready.go:49] node "addons-739733" is "Ready"
	I0908 16:38:31.676175   12465 node_ready.go:38] duration metric: took 42.503920331s for node "addons-739733" to be "Ready" ...
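node_ready.go above is a polling loop: it re-reads the node object until the Ready condition reports True, which here took 42.5s. A rough command-line equivalent as a minimal Go sketch; the kubectl invocation, jsonpath expression, and 2s interval are illustrative assumptions, not minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        // Poll the node's Ready condition; the context and node name come from
        // the log, the jsonpath and 2s interval are assumptions for illustration.
        jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-739733",
                "get", "node", "addons-739733", "-o", "jsonpath="+jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println(`node "addons-739733" is "Ready"`)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node Ready")
    }

The same loop shape underlies the kapi.go waits interleaved around it.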
	I0908 16:38:31.676211   12465 api_server.go:52] waiting for apiserver process to appear ...
	I0908 16:38:31.676329   12465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 16:38:31.780862   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:31.883446   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:31.883583   12465 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 16:38:31.883607   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:32.164318   12465 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 16:38:32.164346   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
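The kapi.go:86/kapi.go:96 pairs above list the pods behind a label selector and keep waiting while any of them is still Pending. A minimal sketch using kubectl wait as a stand-in for that poll loop; the Ready condition and 6m timeout are assumptions (kapi.go itself watches pod phase):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // kubectl wait stands in for kapi.go's own phase-polling loop;
        // selector and context are taken from the log lines above.
        cmd := exec.Command("kubectl", "--context", "addons-739733",
            "-n", "kube-system", "wait", "pod",
            "-l", "kubernetes.io/minikube-addons=csi-hostpath-driver",
            "--for=condition=Ready", "--timeout=6m")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("pods not ready: %v", err)
        }
    }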
	I0908 16:38:32.279694   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:32.385283   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:32.386684   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:32.665145   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:32.783768   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:32.897227   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:32.897786   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:32.969191   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.553765313s)
	W0908 16:38:32.969238   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:32.969261   12465 retry.go:31] will retry after 15.677707185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
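Both the warning and the scheduled retry carry the same root cause: client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because a document in it reached kubectl without the apiVersion and kind fields that every Kubernetes object must declare. A minimal sketch of that check, with a naive line scan standing in for a real YAML parser and a hypothetical manifest standing in for ig-crd.yaml:

    package main

    import (
        "fmt"
        "strings"
    )

    // Every Kubernetes manifest document must set apiVersion and kind (the
    // TypeMeta fields); kubectl's client-side validation rejects documents
    // that omit them, which is exactly the error logged above. The manifest
    // below is a hypothetical stand-in for ig-crd.yaml, not its real content.
    const doc = `
    metadata:
      name: example
    spec: {}
    `

    func main() {
        hasAPIVersion, hasKind := false, false
        for _, line := range strings.Split(doc, "\n") {
            trimmed := strings.TrimSpace(line)
            switch {
            case strings.HasPrefix(trimmed, "apiVersion:"):
                hasAPIVersion = true
            case strings.HasPrefix(trimmed, "kind:"):
                hasKind = true
            }
        }
        if !hasAPIVersion || !hasKind {
            fmt.Println("error validating data: [apiVersion not set, kind not set]")
        }
    }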
	I0908 16:38:32.969299   12465 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.292941821s)
	I0908 16:38:32.969319   12465 api_server.go:72] duration metric: took 45.812111638s to wait for apiserver process to appear ...
	I0908 16:38:32.969330   12465 api_server.go:88] waiting for apiserver healthz status ...
	I0908 16:38:32.969347   12465 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 16:38:32.974211   12465 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0908 16:38:32.975168   12465 api_server.go:141] control plane version: v1.34.0
	I0908 16:38:32.975193   12465 api_server.go:131] duration metric: took 5.85539ms to wait for apiserver health ...
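The healthz probe above is a plain HTTPS GET that passes once the endpoint returns 200 with body "ok". A minimal sketch; InsecureSkipVerify stands in for the cluster CA that minikube actually loads from the kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skipping certificate verification is an assumption for brevity;
        // the real check trusts the cluster CA from the kubeconfig.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }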
	I0908 16:38:32.975204   12465 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 16:38:32.998727   12465 system_pods.go:59] 20 kube-system pods found
	I0908 16:38:32.998765   12465 system_pods.go:61] "amd-gpu-device-plugin-4rtmc" [856bd7fb-7aa1-41c4-9327-3ec267b88a61] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:32.998776   12465 system_pods.go:61] "coredns-66bc5c9577-tb4lv" [b7069615-df7f-4eb6-a534-ac426cc9dc17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:32.998785   12465 system_pods.go:61] "csi-hostpath-attacher-0" [c5ba5b04-4ec3-45d9-938d-cad711466c07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:32.998794   12465 system_pods.go:61] "csi-hostpath-resizer-0" [eb8b0824-982d-418b-bd0f-adb16e71e460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:32.998803   12465 system_pods.go:61] "csi-hostpathplugin-cdrvm" [872b9089-b0c6-4e8b-92a2-09480cd84917] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:32.998810   12465 system_pods.go:61] "etcd-addons-739733" [d160bea1-d669-483b-9b83-31e7c8dacc6d] Running
	I0908 16:38:32.998817   12465 system_pods.go:61] "kindnet-k4fpd" [e3dd9a33-c805-43cb-93d5-22d438a28df3] Running
	I0908 16:38:32.998829   12465 system_pods.go:61] "kube-apiserver-addons-739733" [010c4f53-6d2f-4f72-908b-2d790daed7db] Running
	I0908 16:38:32.998835   12465 system_pods.go:61] "kube-controller-manager-addons-739733" [b0b65b82-7b86-497c-a860-827a9bdb4486] Running
	I0908 16:38:32.998844   12465 system_pods.go:61] "kube-ingress-dns-minikube" [e16e4e72-2a3d-44e4-b49d-145723f28bcd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:32.998850   12465 system_pods.go:61] "kube-proxy-574p4" [5e90bf52-73dd-4fb8-b8dd-aaccb8e5b931] Running
	I0908 16:38:32.998855   12465 system_pods.go:61] "kube-scheduler-addons-739733" [dc2ca285-2505-44a4-a41c-2841f739985b] Running
	I0908 16:38:32.998862   12465 system_pods.go:61] "metrics-server-85b7d694d7-cgkpr" [972f14e5-8beb-4f18-9642-a40844fe5820] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:32.998868   12465 system_pods.go:61] "nvidia-device-plugin-daemonset-7gcdp" [e50230cb-dc63-4d6e-86bc-25517ebbaced] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:32.998879   12465 system_pods.go:61] "registry-66898fdd98-v7wsv" [2000242d-23c3-4a44-8db8-efd30c1097d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:32.998885   12465 system_pods.go:61] "registry-creds-764b6fb674-tkbbk" [44f34945-50d4-4c35-bebd-d456d959aa02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:32.998892   12465 system_pods.go:61] "registry-proxy-wstmd" [e7772012-dac5-420a-94cf-1bf5e180f021] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:32.998902   12465 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4gzml" [67864115-2e8e-4e3e-a2f3-f85114628bcd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:32.998908   12465 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rrn4r" [5258bfdd-1f2a-41d7-ac4b-acdf7be0a825] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:32.998916   12465 system_pods.go:61] "storage-provisioner" [71e67b7f-6490-4df8-94e6-f0e959e86e2d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 16:38:32.998927   12465 system_pods.go:74] duration metric: took 23.716301ms to wait for pod list to return data ...
	I0908 16:38:32.998943   12465 default_sa.go:34] waiting for default service account to be created ...
	I0908 16:38:33.001347   12465 default_sa.go:45] found service account: "default"
	I0908 16:38:33.001371   12465 default_sa.go:55] duration metric: took 2.420771ms for default service account to be created ...
	I0908 16:38:33.001381   12465 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 16:38:33.004170   12465 system_pods.go:86] 20 kube-system pods found
	I0908 16:38:33.004199   12465 system_pods.go:89] "amd-gpu-device-plugin-4rtmc" [856bd7fb-7aa1-41c4-9327-3ec267b88a61] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:33.004207   12465 system_pods.go:89] "coredns-66bc5c9577-tb4lv" [b7069615-df7f-4eb6-a534-ac426cc9dc17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:33.004214   12465 system_pods.go:89] "csi-hostpath-attacher-0" [c5ba5b04-4ec3-45d9-938d-cad711466c07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:33.004219   12465 system_pods.go:89] "csi-hostpath-resizer-0" [eb8b0824-982d-418b-bd0f-adb16e71e460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:33.004226   12465 system_pods.go:89] "csi-hostpathplugin-cdrvm" [872b9089-b0c6-4e8b-92a2-09480cd84917] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:33.004231   12465 system_pods.go:89] "etcd-addons-739733" [d160bea1-d669-483b-9b83-31e7c8dacc6d] Running
	I0908 16:38:33.004235   12465 system_pods.go:89] "kindnet-k4fpd" [e3dd9a33-c805-43cb-93d5-22d438a28df3] Running
	I0908 16:38:33.004238   12465 system_pods.go:89] "kube-apiserver-addons-739733" [010c4f53-6d2f-4f72-908b-2d790daed7db] Running
	I0908 16:38:33.004242   12465 system_pods.go:89] "kube-controller-manager-addons-739733" [b0b65b82-7b86-497c-a860-827a9bdb4486] Running
	I0908 16:38:33.004275   12465 system_pods.go:89] "kube-ingress-dns-minikube" [e16e4e72-2a3d-44e4-b49d-145723f28bcd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:33.004283   12465 system_pods.go:89] "kube-proxy-574p4" [5e90bf52-73dd-4fb8-b8dd-aaccb8e5b931] Running
	I0908 16:38:33.004287   12465 system_pods.go:89] "kube-scheduler-addons-739733" [dc2ca285-2505-44a4-a41c-2841f739985b] Running
	I0908 16:38:33.004292   12465 system_pods.go:89] "metrics-server-85b7d694d7-cgkpr" [972f14e5-8beb-4f18-9642-a40844fe5820] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:33.004297   12465 system_pods.go:89] "nvidia-device-plugin-daemonset-7gcdp" [e50230cb-dc63-4d6e-86bc-25517ebbaced] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:33.004306   12465 system_pods.go:89] "registry-66898fdd98-v7wsv" [2000242d-23c3-4a44-8db8-efd30c1097d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:33.004320   12465 system_pods.go:89] "registry-creds-764b6fb674-tkbbk" [44f34945-50d4-4c35-bebd-d456d959aa02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:33.004330   12465 system_pods.go:89] "registry-proxy-wstmd" [e7772012-dac5-420a-94cf-1bf5e180f021] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:33.004334   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gzml" [67864115-2e8e-4e3e-a2f3-f85114628bcd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:33.004340   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrn4r" [5258bfdd-1f2a-41d7-ac4b-acdf7be0a825] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:33.004345   12465 system_pods.go:89] "storage-provisioner" [71e67b7f-6490-4df8-94e6-f0e959e86e2d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 16:38:33.004373   12465 retry.go:31] will retry after 205.046025ms: missing components: kube-dns
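retry.go reruns a failed check after a randomized, growing delay, which is why the intervals logged in this section creep upward (205ms, then 371ms, then 468ms for the kube-dns wait) and why the ig-crd.yaml apply above was rescheduled at 15.7s and, later, 34.8s. A minimal sketch of the pattern, assuming a doubling base with jitter rather than minikube's exact parameters:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff reruns fn until it succeeds or attempts run out,
    // sleeping a jittered, doubling interval between tries. The base delay
    // and jitter factor are assumptions, not minikube's exact values.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v\n", jittered)
            time.Sleep(jittered)
            delay *= 2
        }
        return errors.New("out of retries")
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
    }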
	I0908 16:38:33.110218   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:33.214470   12465 system_pods.go:86] 20 kube-system pods found
	I0908 16:38:33.214501   12465 system_pods.go:89] "amd-gpu-device-plugin-4rtmc" [856bd7fb-7aa1-41c4-9327-3ec267b88a61] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:33.214510   12465 system_pods.go:89] "coredns-66bc5c9577-tb4lv" [b7069615-df7f-4eb6-a534-ac426cc9dc17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:33.214517   12465 system_pods.go:89] "csi-hostpath-attacher-0" [c5ba5b04-4ec3-45d9-938d-cad711466c07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:33.214523   12465 system_pods.go:89] "csi-hostpath-resizer-0" [eb8b0824-982d-418b-bd0f-adb16e71e460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:33.214529   12465 system_pods.go:89] "csi-hostpathplugin-cdrvm" [872b9089-b0c6-4e8b-92a2-09480cd84917] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:33.214535   12465 system_pods.go:89] "etcd-addons-739733" [d160bea1-d669-483b-9b83-31e7c8dacc6d] Running
	I0908 16:38:33.214539   12465 system_pods.go:89] "kindnet-k4fpd" [e3dd9a33-c805-43cb-93d5-22d438a28df3] Running
	I0908 16:38:33.214542   12465 system_pods.go:89] "kube-apiserver-addons-739733" [010c4f53-6d2f-4f72-908b-2d790daed7db] Running
	I0908 16:38:33.214548   12465 system_pods.go:89] "kube-controller-manager-addons-739733" [b0b65b82-7b86-497c-a860-827a9bdb4486] Running
	I0908 16:38:33.214554   12465 system_pods.go:89] "kube-ingress-dns-minikube" [e16e4e72-2a3d-44e4-b49d-145723f28bcd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:33.214561   12465 system_pods.go:89] "kube-proxy-574p4" [5e90bf52-73dd-4fb8-b8dd-aaccb8e5b931] Running
	I0908 16:38:33.214564   12465 system_pods.go:89] "kube-scheduler-addons-739733" [dc2ca285-2505-44a4-a41c-2841f739985b] Running
	I0908 16:38:33.214569   12465 system_pods.go:89] "metrics-server-85b7d694d7-cgkpr" [972f14e5-8beb-4f18-9642-a40844fe5820] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:33.214574   12465 system_pods.go:89] "nvidia-device-plugin-daemonset-7gcdp" [e50230cb-dc63-4d6e-86bc-25517ebbaced] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:33.214586   12465 system_pods.go:89] "registry-66898fdd98-v7wsv" [2000242d-23c3-4a44-8db8-efd30c1097d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:33.214593   12465 system_pods.go:89] "registry-creds-764b6fb674-tkbbk" [44f34945-50d4-4c35-bebd-d456d959aa02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:33.214599   12465 system_pods.go:89] "registry-proxy-wstmd" [e7772012-dac5-420a-94cf-1bf5e180f021] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:33.214606   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gzml" [67864115-2e8e-4e3e-a2f3-f85114628bcd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:33.214611   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrn4r" [5258bfdd-1f2a-41d7-ac4b-acdf7be0a825] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:33.214618   12465 system_pods.go:89] "storage-provisioner" [71e67b7f-6490-4df8-94e6-f0e959e86e2d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 16:38:33.214631   12465 retry.go:31] will retry after 371.163169ms: missing components: kube-dns
	I0908 16:38:33.279424   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:33.383082   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:33.383104   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:33.590787   12465 system_pods.go:86] 20 kube-system pods found
	I0908 16:38:33.590826   12465 system_pods.go:89] "amd-gpu-device-plugin-4rtmc" [856bd7fb-7aa1-41c4-9327-3ec267b88a61] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:33.590837   12465 system_pods.go:89] "coredns-66bc5c9577-tb4lv" [b7069615-df7f-4eb6-a534-ac426cc9dc17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:33.590846   12465 system_pods.go:89] "csi-hostpath-attacher-0" [c5ba5b04-4ec3-45d9-938d-cad711466c07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:33.590854   12465 system_pods.go:89] "csi-hostpath-resizer-0" [eb8b0824-982d-418b-bd0f-adb16e71e460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:33.590866   12465 system_pods.go:89] "csi-hostpathplugin-cdrvm" [872b9089-b0c6-4e8b-92a2-09480cd84917] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:33.590888   12465 system_pods.go:89] "etcd-addons-739733" [d160bea1-d669-483b-9b83-31e7c8dacc6d] Running
	I0908 16:38:33.590899   12465 system_pods.go:89] "kindnet-k4fpd" [e3dd9a33-c805-43cb-93d5-22d438a28df3] Running
	I0908 16:38:33.590904   12465 system_pods.go:89] "kube-apiserver-addons-739733" [010c4f53-6d2f-4f72-908b-2d790daed7db] Running
	I0908 16:38:33.590914   12465 system_pods.go:89] "kube-controller-manager-addons-739733" [b0b65b82-7b86-497c-a860-827a9bdb4486] Running
	I0908 16:38:33.590925   12465 system_pods.go:89] "kube-ingress-dns-minikube" [e16e4e72-2a3d-44e4-b49d-145723f28bcd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:33.590933   12465 system_pods.go:89] "kube-proxy-574p4" [5e90bf52-73dd-4fb8-b8dd-aaccb8e5b931] Running
	I0908 16:38:33.590941   12465 system_pods.go:89] "kube-scheduler-addons-739733" [dc2ca285-2505-44a4-a41c-2841f739985b] Running
	I0908 16:38:33.590952   12465 system_pods.go:89] "metrics-server-85b7d694d7-cgkpr" [972f14e5-8beb-4f18-9642-a40844fe5820] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:33.590961   12465 system_pods.go:89] "nvidia-device-plugin-daemonset-7gcdp" [e50230cb-dc63-4d6e-86bc-25517ebbaced] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:33.590971   12465 system_pods.go:89] "registry-66898fdd98-v7wsv" [2000242d-23c3-4a44-8db8-efd30c1097d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:33.590979   12465 system_pods.go:89] "registry-creds-764b6fb674-tkbbk" [44f34945-50d4-4c35-bebd-d456d959aa02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:33.590988   12465 system_pods.go:89] "registry-proxy-wstmd" [e7772012-dac5-420a-94cf-1bf5e180f021] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:33.590996   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gzml" [67864115-2e8e-4e3e-a2f3-f85114628bcd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:33.591003   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrn4r" [5258bfdd-1f2a-41d7-ac4b-acdf7be0a825] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:33.591010   12465 system_pods.go:89] "storage-provisioner" [71e67b7f-6490-4df8-94e6-f0e959e86e2d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 16:38:33.591028   12465 retry.go:31] will retry after 468.0449ms: missing components: kube-dns
	I0908 16:38:33.611154   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:33.779719   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:33.882631   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:33.882880   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:34.064756   12465 system_pods.go:86] 20 kube-system pods found
	I0908 16:38:34.064794   12465 system_pods.go:89] "amd-gpu-device-plugin-4rtmc" [856bd7fb-7aa1-41c4-9327-3ec267b88a61] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:34.064804   12465 system_pods.go:89] "coredns-66bc5c9577-tb4lv" [b7069615-df7f-4eb6-a534-ac426cc9dc17] Running
	I0908 16:38:34.064815   12465 system_pods.go:89] "csi-hostpath-attacher-0" [c5ba5b04-4ec3-45d9-938d-cad711466c07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:34.064822   12465 system_pods.go:89] "csi-hostpath-resizer-0" [eb8b0824-982d-418b-bd0f-adb16e71e460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:34.064834   12465 system_pods.go:89] "csi-hostpathplugin-cdrvm" [872b9089-b0c6-4e8b-92a2-09480cd84917] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:34.064840   12465 system_pods.go:89] "etcd-addons-739733" [d160bea1-d669-483b-9b83-31e7c8dacc6d] Running
	I0908 16:38:34.064850   12465 system_pods.go:89] "kindnet-k4fpd" [e3dd9a33-c805-43cb-93d5-22d438a28df3] Running
	I0908 16:38:34.064855   12465 system_pods.go:89] "kube-apiserver-addons-739733" [010c4f53-6d2f-4f72-908b-2d790daed7db] Running
	I0908 16:38:34.064865   12465 system_pods.go:89] "kube-controller-manager-addons-739733" [b0b65b82-7b86-497c-a860-827a9bdb4486] Running
	I0908 16:38:34.064873   12465 system_pods.go:89] "kube-ingress-dns-minikube" [e16e4e72-2a3d-44e4-b49d-145723f28bcd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:34.064881   12465 system_pods.go:89] "kube-proxy-574p4" [5e90bf52-73dd-4fb8-b8dd-aaccb8e5b931] Running
	I0908 16:38:34.064887   12465 system_pods.go:89] "kube-scheduler-addons-739733" [dc2ca285-2505-44a4-a41c-2841f739985b] Running
	I0908 16:38:34.064897   12465 system_pods.go:89] "metrics-server-85b7d694d7-cgkpr" [972f14e5-8beb-4f18-9642-a40844fe5820] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:34.064909   12465 system_pods.go:89] "nvidia-device-plugin-daemonset-7gcdp" [e50230cb-dc63-4d6e-86bc-25517ebbaced] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:34.064922   12465 system_pods.go:89] "registry-66898fdd98-v7wsv" [2000242d-23c3-4a44-8db8-efd30c1097d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:34.064933   12465 system_pods.go:89] "registry-creds-764b6fb674-tkbbk" [44f34945-50d4-4c35-bebd-d456d959aa02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:34.064945   12465 system_pods.go:89] "registry-proxy-wstmd" [e7772012-dac5-420a-94cf-1bf5e180f021] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:34.064962   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gzml" [67864115-2e8e-4e3e-a2f3-f85114628bcd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:34.064975   12465 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrn4r" [5258bfdd-1f2a-41d7-ac4b-acdf7be0a825] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:34.064981   12465 system_pods.go:89] "storage-provisioner" [71e67b7f-6490-4df8-94e6-f0e959e86e2d] Running
	I0908 16:38:34.064997   12465 system_pods.go:126] duration metric: took 1.063609048s to wait for k8s-apps to be running ...
	I0908 16:38:34.065008   12465 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 16:38:34.065063   12465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 16:38:34.078804   12465 system_svc.go:56] duration metric: took 13.787428ms WaitForService to wait for kubelet
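systemctl is-active --quiet reports service state purely through its exit code, so the kubelet check above is just a remote command whose zero status means active. A minimal local sketch; running via os/exec is an assumption, since minikube executes this over SSH inside the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; exit status 0 means the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }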
	I0908 16:38:34.078839   12465 kubeadm.go:578] duration metric: took 46.921629059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 16:38:34.078864   12465 node_conditions.go:102] verifying NodePressure condition ...
	I0908 16:38:34.081334   12465 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 16:38:34.081361   12465 node_conditions.go:123] node cpu capacity is 8
	I0908 16:38:34.081376   12465 node_conditions.go:105] duration metric: took 2.506061ms to run NodePressure ...
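Node capacity is reported as a Kubernetes quantity, so the 304681132Ki ephemeral-storage figure above is kibibytes, roughly 290 GiB. A quick conversion sketch in plain arithmetic (not the k8s resource.Quantity parser):

    package main

    import "fmt"

    func main() {
        const ephemeralKi = 304681132 // node .status.capacity ephemeral-storage, in Ki
        bytes := int64(ephemeralKi) * 1024
        fmt.Printf("%dKi = %d bytes ≈ %.1f GiB\n", ephemeralKi, bytes, float64(bytes)/(1<<30))
    }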
	I0908 16:38:34.081389   12465 start.go:241] waiting for startup goroutines ...
	I0908 16:38:34.111123   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:34.279878   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:34.382686   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:34.383332   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:34.611224   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:34.779987   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:34.883081   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:34.883216   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:35.111619   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:35.279237   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:35.383327   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:35.383586   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:35.610846   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:35.779327   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:35.882539   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:35.882794   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:36.110679   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:36.279204   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:36.383353   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:36.383356   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:36.611369   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:36.780203   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:36.882928   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:36.883007   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:37.111197   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:37.280183   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:37.383048   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:37.383137   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:37.610658   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:37.779337   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:37.882993   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:37.883058   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:38.111104   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:38.279715   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:38.383141   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:38.383163   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:38.610942   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:38.779583   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:38.882361   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:38.883104   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:39.111277   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:39.279923   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:39.382530   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:39.383124   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:39.611628   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:39.779382   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:39.883510   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:39.883553   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:40.111775   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:40.279368   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:40.383442   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:40.383517   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:40.611434   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:40.781183   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:40.894522   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:40.894790   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:41.111514   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:41.279464   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:41.383317   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:41.383368   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:41.611687   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:41.779273   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:41.883382   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:41.883396   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:42.111303   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:42.279915   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:42.382755   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:42.383623   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:42.610653   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:42.779335   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:42.882822   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:42.883005   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:43.111148   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:43.279726   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:43.382526   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:43.383163   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:43.612078   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:43.779649   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:43.882668   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:43.883089   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:44.165204   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:44.279818   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:44.383182   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:44.384077   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:44.663989   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:44.779904   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:44.883576   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:44.883621   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:45.164054   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:45.279735   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:45.382806   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:45.383232   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:45.664795   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:45.779443   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:45.882490   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:45.882896   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:46.111202   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:46.279883   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:46.382587   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:46.383242   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:46.611261   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:46.779764   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:46.882972   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:46.883834   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:47.111357   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:47.279158   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:47.383477   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:47.383504   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:47.611989   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:47.779910   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:47.882868   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:47.883222   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:48.111570   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:48.279986   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:48.383083   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:48.383703   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:48.611054   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:48.648063   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:48.779906   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:48.883014   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:48.883301   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:49.164373   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:49.282188   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:49.383418   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:49.383554   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:49.664683   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:49.779696   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:49.883224   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:49.883406   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:50.087960   12465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.43985555s)
	W0908 16:38:50.088013   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:50.088041   12465 retry.go:31] will retry after 34.835609554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:50.163212   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:50.280120   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:50.383145   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:50.383177   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:50.612676   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:50.779951   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:50.883413   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:50.883483   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:51.111119   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:51.279955   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:51.382930   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:51.383337   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:51.611235   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:51.779912   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:51.882622   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:51.883419   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:52.111556   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:52.279149   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:52.382967   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:52.383078   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:52.611368   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:52.779996   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:52.882698   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:52.882808   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:53.111301   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:53.280178   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:53.382941   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:53.382976   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:53.611056   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:53.780485   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:53.882816   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:53.883149   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:54.111494   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:54.278798   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:54.382831   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:54.383525   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:54.610541   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:54.779779   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:54.882623   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:54.883163   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:55.111649   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:55.279548   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:55.382557   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:55.383509   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:55.610514   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:55.779143   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:55.883339   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:55.883420   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:56.111227   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:56.279883   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:56.382758   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:56.383483   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:56.610681   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:56.779678   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:56.882899   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:56.883394   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:57.111725   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:57.279463   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:57.383204   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:57.383225   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:57.611320   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:57.780282   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:57.882963   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:57.883003   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:58.111021   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:58.279749   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:58.382709   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:58.383123   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:58.611152   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:58.779638   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:58.882403   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:58.883126   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:59.111533   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:59.279501   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:59.383245   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:59.383275   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:59.611216   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:59.779916   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:59.882713   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:59.883194   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:00.111309   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:00.280169   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:00.383162   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:00.383212   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:00.611050   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:00.780275   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:00.883517   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:00.883553   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:01.164594   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:01.279913   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:01.383380   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:01.383383   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:01.611489   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:01.779575   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:01.883443   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:01.883487   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:02.111675   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:02.279181   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:02.383122   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:02.383161   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:02.611290   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:02.780145   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:02.883199   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:02.883631   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:03.111199   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:03.280146   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:03.382926   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:03.382938   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:03.611233   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:03.780127   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:03.882842   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:03.883280   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:04.111463   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:04.279047   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:04.383033   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:04.383640   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:04.613150   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:04.779730   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:04.882482   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:04.882930   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:05.111539   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:05.280243   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:05.382636   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:05.382836   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:05.610531   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:05.780535   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:05.883217   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:05.883274   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:06.111583   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:06.278859   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:06.382882   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:06.383304   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:06.611355   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:06.780352   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:06.883087   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:06.883307   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:07.111261   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:07.362284   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:07.383284   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:07.383286   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:07.611468   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:07.779938   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:07.882989   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:07.883185   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:08.111714   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:08.279373   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:08.383382   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:08.383532   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:08.611414   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:08.780128   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:08.882856   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:08.883057   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:09.111325   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:09.281957   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:09.383023   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:09.383667   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:09.610834   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:09.779731   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:09.882579   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:09.883454   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:10.111610   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:10.278936   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:10.382611   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:10.383541   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:10.610417   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:10.779421   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:10.882493   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:10.883037   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:11.111130   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:11.280087   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:11.382932   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:11.383064   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:11.611609   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:11.779333   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:11.883239   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:11.883312   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:12.111376   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:12.280239   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:12.383047   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:12.383173   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:12.611260   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:12.780058   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:12.882827   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:12.882848   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:13.110989   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:13.280085   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:13.382950   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:13.383419   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:13.611803   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:13.779352   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:13.883034   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:13.883047   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:14.111502   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:14.278902   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:14.382783   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:14.383610   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:14.610745   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:14.779663   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:14.882955   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:14.883793   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:15.165266   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:15.279243   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:15.463670   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:15.464551   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:15.664491   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:15.780740   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:15.885191   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:15.885654   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:16.166444   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:16.363962   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:16.385478   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:16.386027   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:16.665440   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:16.780530   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:16.883420   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:16.883827   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:17.111202   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:17.280332   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:17.383560   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:17.383623   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:17.610889   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:17.779609   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:17.882354   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:17.883006   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:18.111039   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:18.279913   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:18.382831   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:18.383256   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:18.611335   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:18.780594   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:18.882716   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:18.883274   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:19.111943   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:19.279686   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:19.382908   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:19.383134   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:19.611487   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:19.780487   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:19.883502   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:19.883537   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:20.111227   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:20.280059   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:20.382942   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:20.383177   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:20.611519   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:20.779855   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:20.882613   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:20.883431   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:21.111375   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:21.279813   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:21.383284   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:21.383315   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:21.611686   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:21.779575   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:21.882665   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:21.883017   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:22.111627   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:22.279570   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:22.382560   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:22.383008   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:22.611408   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:22.780152   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:22.882713   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:22.883155   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:23.111542   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:23.278875   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:23.383478   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:23.383540   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:23.610603   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:23.781570   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:23.883791   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:23.883876   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:24.164427   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:24.297341   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:24.480822   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:24.481034   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:24.610945   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:24.779693   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:24.882629   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:24.883004   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:24.924185   12465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:39:25.112379   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:25.282099   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:25.383638   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:25.383718   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 16:39:25.500268   12465 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 16:39:25.500403   12465 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
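
Note: the earlier "will retry after 34.835609554s" line (retry.go:31) and the give-up above ("Enabling 'inspektor-gadget' returned an error: running callbacks: ...") reflect a randomized, growing backoff wrapped around the failing kubectl apply. A minimal sketch of that pattern, assuming exponential backoff with jitter (minikube's actual retry helper may differ in cap and jitter policy):

	package main

	import (
		"fmt"
		"math"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted, sleeping a
	// jittered, exponentially growing interval in between and logging the wait,
	// which is the shape of the "will retry after ..." lines in this log.
	func retry(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			// base * 2^attempt, scaled by a random factor in [0.5, 1.5).
			sleep := time.Duration(float64(base) * math.Pow(2, float64(attempt)) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		// Surfaced to the user as the "running callbacks" warning above.
		return fmt.Errorf("running callbacks: %w", err)
	}

	func main() {
		_ = retry(3, 10*time.Second, func() error {
			return fmt.Errorf("kubectl apply --force: exit status 1")
		})
	}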
	I0908 16:39:25.611398   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:25.779999   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:25.883045   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:25.883075   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:26.111384   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:26.280079   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:26.382711   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:26.382890   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:26.611361   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:26.780212   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:26.883227   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:26.883357   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:27.111654   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:27.279727   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:27.383051   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:27.383212   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:27.611153   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:27.779875   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:27.882722   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:27.883215   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:28.111235   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:28.282041   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:28.382459   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:28.383002   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:28.611129   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:28.779613   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:28.882618   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:28.883225   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:29.111268   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:29.279807   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:29.383347   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:29.383384   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:29.611751   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:29.779241   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:29.882860   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:29.882944   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:30.110851   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:30.279557   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:30.383413   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:30.383506   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:30.611431   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:30.780383   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:30.882277   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:30.882700   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:31.110753   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:31.279578   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:31.382656   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:31.382977   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:31.611060   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:31.780087   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:31.882996   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:31.882999   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:32.110755   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:32.279278   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:32.383008   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:32.383106   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:32.611062   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:32.779665   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:32.882467   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:32.883116   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:33.111234   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:33.279841   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:33.383002   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:33.383204   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:33.611408   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:33.780162   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:33.883096   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:33.883106   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:34.111388   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:34.279911   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:34.383058   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:34.383105   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:34.611375   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:34.779045   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:34.882820   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:34.883362   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:35.111740   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:35.279603   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:35.385023   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:35.385143   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:35.611455   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:35.780157   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:35.883977   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:35.884023   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:36.111081   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:36.279971   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:36.382912   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:36.382922   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:36.611123   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:36.779702   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:36.882816   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:36.883290   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:37.111826   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:37.279781   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:37.383056   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:37.383469   12465 kapi.go:107] duration metric: took 1m44.503098699s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 16:39:37.610572   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:37.778823   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:37.882587   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:38.110394   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:38.280382   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:38.383138   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:38.611330   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:38.780146   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:38.883242   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:39.165384   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:39.279832   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:39.382492   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:39.664850   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:39.779561   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:39.882256   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:40.111405   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:40.280420   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:40.383148   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:40.611613   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:40.780679   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:40.882821   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:41.111096   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:41.279885   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:41.382916   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:41.610829   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:41.779841   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:41.884164   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:42.111516   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:42.279996   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:42.383158   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:42.611662   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:42.779864   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:42.883090   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:43.111138   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:43.279060   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:43.382819   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:43.610786   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:43.779193   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:43.882959   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:44.110604   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:44.281874   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:44.382604   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:44.610867   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:44.779944   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:44.883157   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:45.111360   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:45.280318   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:45.383440   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:45.663817   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:45.779620   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:45.882938   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:46.111623   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:46.279361   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:46.383514   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:46.665432   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:46.780231   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:46.883269   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:47.111622   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:47.283808   12465 kapi.go:107] duration metric: took 1m50.007418228s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 16:39:47.285631   12465 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-739733 cluster.
	I0908 16:39:47.287169   12465 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 16:39:47.363585   12465 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 16:39:47.385085   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:47.764608   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:47.882846   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:48.165506   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:48.383163   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:48.663943   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:48.883274   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:49.165621   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:49.382557   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:49.611933   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:49.883051   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:50.111031   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:50.383080   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:50.611926   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:50.884110   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:51.111175   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:51.383290   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:51.712885   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:51.882995   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:52.111391   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:52.383785   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:52.610724   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:52.882756   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:53.110861   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:53.382346   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:53.611713   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:53.882728   12465 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:54.163973   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:54.384826   12465 kapi.go:107] duration metric: took 2m1.505366809s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 16:39:54.610825   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:55.111662   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:55.665640   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:56.111942   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:56.611290   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:57.111176   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:57.612148   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:58.110893   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:58.611485   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:59.111520   12465 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:59.610964   12465 kapi.go:107] duration metric: took 2m5.503378306s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 16:39:59.612901   12465 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, registry-creds, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0908 16:39:59.614466   12465 addons.go:514] duration metric: took 2m12.45704306s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner registry-creds default-storageclass nvidia-device-plugin storage-provisioner-rancher storage-provisioner metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0908 16:39:59.614511   12465 start.go:246] waiting for cluster config update ...
	I0908 16:39:59.614530   12465 start.go:255] writing updated cluster config ...
	I0908 16:39:59.614806   12465 ssh_runner.go:195] Run: rm -f paused
	I0908 16:39:59.618882   12465 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 16:39:59.622243   12465 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tb4lv" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:39:59.626209   12465 pod_ready.go:94] pod "coredns-66bc5c9577-tb4lv" is "Ready"
	I0908 16:39:59.626233   12465 pod_ready.go:86] duration metric: took 3.966367ms for pod "coredns-66bc5c9577-tb4lv" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:39:59.628101   12465 pod_ready.go:83] waiting for pod "etcd-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:39:59.631963   12465 pod_ready.go:94] pod "etcd-addons-739733" is "Ready"
	I0908 16:39:59.631983   12465 pod_ready.go:86] duration metric: took 3.859743ms for pod "etcd-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:39:59.633954   12465 pod_ready.go:83] waiting for pod "kube-apiserver-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:39:59.637767   12465 pod_ready.go:94] pod "kube-apiserver-addons-739733" is "Ready"
	I0908 16:39:59.637793   12465 pod_ready.go:86] duration metric: took 3.815851ms for pod "kube-apiserver-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:39:59.639553   12465 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:00.022402   12465 pod_ready.go:94] pod "kube-controller-manager-addons-739733" is "Ready"
	I0908 16:40:00.022429   12465 pod_ready.go:86] duration metric: took 382.855408ms for pod "kube-controller-manager-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:00.222930   12465 pod_ready.go:83] waiting for pod "kube-proxy-574p4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:00.622896   12465 pod_ready.go:94] pod "kube-proxy-574p4" is "Ready"
	I0908 16:40:00.622920   12465 pod_ready.go:86] duration metric: took 399.964966ms for pod "kube-proxy-574p4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:00.823143   12465 pod_ready.go:83] waiting for pod "kube-scheduler-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:01.223125   12465 pod_ready.go:94] pod "kube-scheduler-addons-739733" is "Ready"
	I0908 16:40:01.223151   12465 pod_ready.go:86] duration metric: took 399.985021ms for pod "kube-scheduler-addons-739733" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:01.223163   12465 pod_ready.go:40] duration metric: took 1.604237906s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 16:40:01.278328   12465 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 16:40:01.280284   12465 out.go:179] * Done! kubectl is now configured to use "addons-739733" cluster and "default" namespace by default
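	[editor's note] The gcp-auth messages at 16:39:47 above describe a per-pod opt-out label and a --refresh path. A minimal sketch of both, hedged: only the label key `gcp-auth-skip-secret` and the --refresh flag come from the log; the pod name no-creds-demo, the image, and the label value "true" are illustrative assumptions.

	    # create a pod the gcp-auth webhook should skip (label key taken from the log above)
	    kubectl --context addons-739733 run no-creds-demo \
	      --image=gcr.io/k8s-minikube/busybox --restart=Never \
	      --labels=gcp-auth-skip-secret=true -- sleep 300
	    # re-mount credentials into pods created before the addon finished, per the log's hint
	    minikube -p addons-739733 addons enable gcp-auth --refresh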
	
	
	==> CRI-O <==
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.097418992Z" level=info msg="Removed pod sandbox: 20996bafdbfd4e84d49498e3dae13fea3248568611adbd868aa62a7d1214a16a" id=40f080f6-27fa-43e3-810b-6e204219d964 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.097915346Z" level=info msg="Stopping pod sandbox: 812273bd0d31aeb266c44d841484c682c64587e35af8fdc471f26dd9c0e41d51" id=e137ffaf-8da1-4816-8353-eb9a4b9e351e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.097955030Z" level=info msg="Stopped pod sandbox (already stopped): 812273bd0d31aeb266c44d841484c682c64587e35af8fdc471f26dd9c0e41d51" id=e137ffaf-8da1-4816-8353-eb9a4b9e351e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.098262250Z" level=info msg="Removing pod sandbox: 812273bd0d31aeb266c44d841484c682c64587e35af8fdc471f26dd9c0e41d51" id=dcd91241-8000-4791-bd94-7b36960b9ebf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.103806546Z" level=info msg="Removed pod sandbox: 812273bd0d31aeb266c44d841484c682c64587e35af8fdc471f26dd9c0e41d51" id=dcd91241-8000-4791-bd94-7b36960b9ebf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.104268148Z" level=info msg="Stopping pod sandbox: 2091b42f99362d9f6c558ec2e64a245f5b910edd4a3a98a0160f7d06a590e5a0" id=a1900cfe-6737-4aec-b953-140889338935 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.104302952Z" level=info msg="Stopped pod sandbox (already stopped): 2091b42f99362d9f6c558ec2e64a245f5b910edd4a3a98a0160f7d06a590e5a0" id=a1900cfe-6737-4aec-b953-140889338935 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.104620952Z" level=info msg="Removing pod sandbox: 2091b42f99362d9f6c558ec2e64a245f5b910edd4a3a98a0160f7d06a590e5a0" id=3b226faa-8635-4db1-a7e2-26b9eae205f5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.111441078Z" level=info msg="Removed pod sandbox: 2091b42f99362d9f6c558ec2e64a245f5b910edd4a3a98a0160f7d06a590e5a0" id=3b226faa-8635-4db1-a7e2-26b9eae205f5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.111851863Z" level=info msg="Stopping pod sandbox: d59bf064c838940964591e946cf21cd2456ffd00aed49c878d8d24222e5543eb" id=83e41506-bd3b-484e-97b0-205f450d1faf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.111882632Z" level=info msg="Stopped pod sandbox (already stopped): d59bf064c838940964591e946cf21cd2456ffd00aed49c878d8d24222e5543eb" id=83e41506-bd3b-484e-97b0-205f450d1faf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.112137279Z" level=info msg="Removing pod sandbox: d59bf064c838940964591e946cf21cd2456ffd00aed49c878d8d24222e5543eb" id=8eef0875-2c83-4ea0-9b94-7b4981ecd6ec name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:41:42 addons-739733 crio[1053]: time="2025-09-08 16:41:42.120359796Z" level=info msg="Removed pod sandbox: d59bf064c838940964591e946cf21cd2456ffd00aed49c878d8d24222e5543eb" id=8eef0875-2c83-4ea0-9b94-7b4981ecd6ec name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.710694225Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-h6rjt/POD" id=30f3c921-8ad3-4bdc-8716-b27d284aa115 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.710761447Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.729440385Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-h6rjt Namespace:default ID:fd96b7cb63c867d7479c3804d8c1d3a970c7e82d29ed4996b475c6b3a70a2caf UID:5c5d8413-6ae0-49b8-93d5-ede3601486c1 NetNS:/var/run/netns/e43f3507-c0f1-4b13-adc2-82831263b63b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.729472157Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-h6rjt to CNI network \"kindnet\" (type=ptp)"
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.738222170Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-h6rjt Namespace:default ID:fd96b7cb63c867d7479c3804d8c1d3a970c7e82d29ed4996b475c6b3a70a2caf UID:5c5d8413-6ae0-49b8-93d5-ede3601486c1 NetNS:/var/run/netns/e43f3507-c0f1-4b13-adc2-82831263b63b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.738343071Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-h6rjt for CNI network kindnet (type=ptp)"
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.740617076Z" level=info msg="Ran pod sandbox fd96b7cb63c867d7479c3804d8c1d3a970c7e82d29ed4996b475c6b3a70a2caf with infra container: default/hello-world-app-5d498dc89-h6rjt/POD" id=30f3c921-8ad3-4bdc-8716-b27d284aa115 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.741675657Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6ddc5673-54ff-4d85-9b8d-d3fe19a38ea1 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.741868787Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=6ddc5673-54ff-4d85-9b8d-d3fe19a38ea1 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.742436402Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=56ff9b18-1617-47db-95b9-9b7197524bda name=/runtime.v1.ImageService/PullImage
	Sep 08 16:42:53 addons-739733 crio[1053]: time="2025-09-08 16:42:53.748144555Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 16:42:54 addons-739733 crio[1053]: time="2025-09-08 16:42:54.765884324Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
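	[editor's note] The CRI-O entries above show the runtime's ImageStatus miss ("Image docker.io/kicbase/echo-server:1.0 not found") followed by a PullImage attempt. A sketch of reproducing the same check by hand from the node, assuming crictl is present on the node's PATH:

	    # inspect the image cache, then trigger the same pull CRI-O performs
	    minikube -p addons-739733 ssh -- sudo crictl images | grep echo-server
	    minikube -p addons-739733 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0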
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3c7950f49025       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   f00d2b039b0f8       nginx
	f335d9315e91c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   cc2b6ab23eea3       busybox
	837553b5eda30       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   818bbfbe2b0e2       ingress-nginx-controller-9cc49f96f-fcxgl
	27e9f6e1a1dbc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            3 minutes ago       Running             gadget                    0                   73fd616078035       gadget-q4xzf
	ee425b02b6ee2       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago       Exited              patch                     1                   5bd89e3636cca       ingress-nginx-admission-patch-4rjct
	1b33e718369ad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   74f8b7d6909b9       ingress-nginx-admission-create-pvbnl
	d16a6744bb496       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   501dc9c76d02e       kube-ingress-dns-minikube
	72b2ec037e93d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   1b4ee24bbaae2       coredns-66bc5c9577-tb4lv
	5c3707b4c745b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   91c9446d2714e       storage-provisioner
	74456788600d3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             5 minutes ago       Running             kindnet-cni               0                   6345bd99bc78a       kindnet-k4fpd
	c9b4b4fe957fc       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   cc74e26253aa3       kube-proxy-574p4
	c149f4d9fd5a9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   e9ddb682bee00       kube-apiserver-addons-739733
	06eb2c057807e       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   94b136ec39d3a       kube-scheduler-addons-739733
	16b691c331765       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   5d93ae2785f27       kube-controller-manager-addons-739733
	a39d5dd5f589b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   a47bfa6c46397       etcd-addons-739733
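	[editor's note] The container table above is crictl-style output; a sketch to regenerate it on the node, assuming crictl is installed there:

	    # list all containers, including exited ones (e.g. the admission create/patch jobs)
	    minikube -p addons-739733 ssh -- sudo crictl ps -a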
	
	
	==> coredns [72b2ec037e93d5c303b76b0f9cb1a3d7c3bb02231605b2337976dd56e02d813e] <==
	[INFO] 10.244.0.19:38520 - 25856 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005459688s
	[INFO] 10.244.0.19:54760 - 34617 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004848687s
	[INFO] 10.244.0.19:54760 - 34362 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004901399s
	[INFO] 10.244.0.19:37218 - 38699 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003935804s
	[INFO] 10.244.0.19:37218 - 38397 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004113819s
	[INFO] 10.244.0.19:36271 - 731 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158567s
	[INFO] 10.244.0.19:36271 - 504 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000197683s
	[INFO] 10.244.0.22:49448 - 29841 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180921s
	[INFO] 10.244.0.22:57926 - 53110 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000240319s
	[INFO] 10.244.0.22:41158 - 55605 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133037s
	[INFO] 10.244.0.22:54750 - 37496 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000173909s
	[INFO] 10.244.0.22:45230 - 23541 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010752s
	[INFO] 10.244.0.22:56320 - 59141 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128781s
	[INFO] 10.244.0.22:56117 - 35270 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003426798s
	[INFO] 10.244.0.22:55573 - 43909 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003606332s
	[INFO] 10.244.0.22:48247 - 17233 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004226971s
	[INFO] 10.244.0.22:40008 - 2596 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005139166s
	[INFO] 10.244.0.22:39588 - 20408 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004096882s
	[INFO] 10.244.0.22:50188 - 26706 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004350749s
	[INFO] 10.244.0.22:52584 - 53180 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004217726s
	[INFO] 10.244.0.22:51048 - 47697 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004258404s
	[INFO] 10.244.0.22:54749 - 35234 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001974465s
	[INFO] 10.244.0.22:48445 - 41959 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002128485s
	[INFO] 10.244.0.26:47356 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000257666s
	[INFO] 10.244.0.26:50231 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149218s
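	[editor's note] The NXDOMAIN runs above are the pod resolver walking its DNS search list (cluster.local, google.internal, the GCE project domains) before the fully-qualified name answers NOERROR. A sketch for inspecting that search path from the busybox pod seen earlier in this report, assuming it is still running:

	    # show the search domains the resolver expands, then resolve the service name directly
	    kubectl --context addons-739733 exec busybox -- cat /etc/resolv.conf
	    kubectl --context addons-739733 exec busybox -- nslookup registry.kube-system.svc.cluster.local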
	
	
	==> describe nodes <==
	Name:               addons-739733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-739733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=addons-739733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T16_37_42_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-739733
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 16:37:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-739733
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 16:42:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 16:41:16 +0000   Mon, 08 Sep 2025 16:37:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 16:41:16 +0000   Mon, 08 Sep 2025 16:37:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 16:41:16 +0000   Mon, 08 Sep 2025 16:37:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 16:41:16 +0000   Mon, 08 Sep 2025 16:38:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-739733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 22f079a79afa42fcabed9ef7a3453036
	  System UUID:                b8952d84-e918-4bf1-893e-a34c32b19204
	  Boot ID:                    b484f3f8-b9f0-49fd-b361-646a5559e856
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     hello-world-app-5d498dc89-h6rjt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-q4xzf                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-fcxgl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m3s
	  kube-system                 coredns-66bc5c9577-tb4lv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m8s
	  kube-system                 etcd-addons-739733                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m13s
	  kube-system                 kindnet-k4fpd                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m8s
	  kube-system                 kube-apiserver-addons-739733                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-addons-739733       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-574p4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-addons-739733                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m2s   kube-proxy       
	  Normal   Starting                 5m14s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m14s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m14s  kubelet          Node addons-739733 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m14s  kubelet          Node addons-739733 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m14s  kubelet          Node addons-739733 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m9s   node-controller  Node addons-739733 event: Registered Node addons-739733 in Controller
	  Normal   NodeReady                4m24s  kubelet          Node addons-739733 status is now: NodeReady
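	[editor's note] A quick arithmetic check of the "Allocated resources" block above: CPU requests sum to 100m+100m+100m+100m+250m+200m+100m = 950m, and 950m of the node's 8000m allocatable is ~11.9%, matching the reported "950m (11%)"; memory requests 90Mi+70Mi+100Mi+50Mi = 310Mi likewise match.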
	
	
	==> dmesg <==
	[  +0.000741] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000620] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000649] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000618] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001023] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.605815] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021322] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[ +10.132527] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 8 16:40] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +1.013561] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +2.019851] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +4.059716] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +8.195413] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[Sep 8 16:41] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[ +34.049604] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
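	[editor's note] The "martian source" lines above are the kernel flagging packets with a loopback source (127.0.0.1) forwarded toward the pod IP 10.244.0.21 on eth0; kube-proxy's route_localnet=1 setting (see its log further below) is what lets localhost NodePort traffic take this path. A sketch for confirming both sysctls on the node, assuming minikube ssh access:

	    # route_localnet enables the forwarding; log_martians controls whether it is logged
	    minikube -p addons-739733 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.eth0.log_martians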
	
	
	==> etcd [a39d5dd5f589b3b41c3316df9456b24c932fdb7a8cdc679d62f23a8db9adf1c5] <==
	{"level":"warn","ts":"2025-09-08T16:37:50.790924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.651726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-739733\" limit:1 ","response":"range_response_count:1 size:5565"}
	{"level":"info","ts":"2025-09-08T16:37:50.790941Z","caller":"traceutil/trace.go:172","msg":"trace[2083807843] range","detail":"{range_begin:/registry/minions/addons-739733; range_end:; response_count:1; response_revision:402; }","duration":"108.669351ms","start":"2025-09-08T16:37:50.682267Z","end":"2025-09-08T16:37:50.790937Z","steps":["trace[2083807843] 'agreement among raft nodes before linearized reading'  (duration: 108.603314ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:37:50.791037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.905809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-08T16:37:50.791049Z","caller":"traceutil/trace.go:172","msg":"trace[299317180] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:402; }","duration":"108.918722ms","start":"2025-09-08T16:37:50.682126Z","end":"2025-09-08T16:37:50.791045Z","steps":["trace[299317180] 'agreement among raft nodes before linearized reading'  (duration: 108.876407ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:37:50.791145Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.381368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-574p4\" limit:1 ","response":"range_response_count:1 size:5036"}
	{"level":"info","ts":"2025-09-08T16:37:50.791156Z","caller":"traceutil/trace.go:172","msg":"trace[1805099840] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-574p4; range_end:; response_count:1; response_revision:402; }","duration":"110.395264ms","start":"2025-09-08T16:37:50.680758Z","end":"2025-09-08T16:37:50.791153Z","steps":["trace[1805099840] 'agreement among raft nodes before linearized reading'  (duration: 110.342616ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:37:54.578555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:37:54.594442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:38:16.114562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:38:16.120898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:38:16.143418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48970","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T16:39:24.479461Z","caller":"traceutil/trace.go:172","msg":"trace[795099193] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"117.017708ms","start":"2025-09-08T16:39:24.362414Z","end":"2025-09-08T16:39:24.479432Z","steps":["trace[795099193] 'process raft request'  (duration: 49.588286ms)","trace[795099193] 'compare'  (duration: 67.192233ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T16:39:51.710831Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.885802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:39:51.710895Z","caller":"traceutil/trace.go:172","msg":"trace[1791183616] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1235; }","duration":"100.967257ms","start":"2025-09-08T16:39:51.609917Z","end":"2025-09-08T16:39:51.710885Z","steps":["trace[1791183616] 'range keys from in-memory index tree'  (duration: 100.804946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:40:24.023156Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.130609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/headlamp\" limit:1 ","response":"range_response_count:1 size:581"}
	{"level":"info","ts":"2025-09-08T16:40:24.023233Z","caller":"traceutil/trace.go:172","msg":"trace[847628333] range","detail":"{range_begin:/registry/namespaces/headlamp; range_end:; response_count:1; response_revision:1409; }","duration":"111.211141ms","start":"2025-09-08T16:40:23.912001Z","end":"2025-09-08T16:40:24.023212Z","steps":["trace[847628333] 'agreement among raft nodes before linearized reading'  (duration: 55.232814ms)","trace[847628333] 'range keys from in-memory index tree'  (duration: 55.871059ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T16:40:24.023732Z","caller":"traceutil/trace.go:172","msg":"trace[152684943] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"113.040827ms","start":"2025-09-08T16:40:23.910675Z","end":"2025-09-08T16:40:24.023716Z","steps":["trace[152684943] 'process raft request'  (duration: 56.581256ms)","trace[152684943] 'compare'  (duration: 56.161243ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T16:40:24.023748Z","caller":"traceutil/trace.go:172","msg":"trace[1880300054] transaction","detail":"{read_only:false; response_revision:1411; number_of_response:1; }","duration":"113.029042ms","start":"2025-09-08T16:40:23.910700Z","end":"2025-09-08T16:40:24.023729Z","steps":["trace[1880300054] 'process raft request'  (duration: 112.86703ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:40:24.268864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.184516ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128039833981059539 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/headlamp/headlamp\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/headlamp/headlamp\" value_size:1479 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T16:40:24.269050Z","caller":"traceutil/trace.go:172","msg":"trace[1475618963] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"172.420099ms","start":"2025-09-08T16:40:24.096608Z","end":"2025-09-08T16:40:24.269028Z","steps":["trace[1475618963] 'process raft request'  (duration: 56.007043ms)","trace[1475618963] 'compare'  (duration: 115.956085ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T16:40:24.457500Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.018372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:40:24.457572Z","caller":"traceutil/trace.go:172","msg":"trace[676705231] transaction","detail":"{read_only:false; response_revision:1416; number_of_response:1; }","duration":"184.946434ms","start":"2025-09-08T16:40:24.272605Z","end":"2025-09-08T16:40:24.457552Z","steps":["trace[676705231] 'process raft request'  (duration: 135.372381ms)","trace[676705231] 'compare'  (duration: 49.42939ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T16:40:24.457603Z","caller":"traceutil/trace.go:172","msg":"trace[727711146] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1415; }","duration":"182.123981ms","start":"2025-09-08T16:40:24.275460Z","end":"2025-09-08T16:40:24.457584Z","steps":["trace[727711146] 'agreement among raft nodes before linearized reading'  (duration: 132.532326ms)","trace[727711146] 'range keys from in-memory index tree'  (duration: 49.455969ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T16:40:24.457706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.101266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/headlamp\" limit:1 ","response":"range_response_count:1 size:581"}
	{"level":"info","ts":"2025-09-08T16:40:24.457755Z","caller":"traceutil/trace.go:172","msg":"trace[1143859665] range","detail":"{range_begin:/registry/namespaces/headlamp; range_end:; response_count:1; response_revision:1416; }","duration":"123.156475ms","start":"2025-09-08T16:40:24.334586Z","end":"2025-09-08T16:40:24.457742Z","steps":["trace[1143859665] 'agreement among raft nodes before linearized reading'  (duration: 122.962784ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:42:55 up 25 min,  0 users,  load average: 0.75, 0.75, 0.37
	Linux addons-739733 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [74456788600d348d379c049656ccaa88adb1efb32bb6589bc12e763ce8eb2114] <==
	I0908 16:40:51.264505       1 main.go:301] handling current node
	I0908 16:41:01.264804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:41:01.264835       1 main.go:301] handling current node
	I0908 16:41:11.264623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:41:11.264653       1 main.go:301] handling current node
	I0908 16:41:21.264152       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:41:21.264206       1 main.go:301] handling current node
	I0908 16:41:31.264569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:41:31.264609       1 main.go:301] handling current node
	I0908 16:41:41.264852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:41:41.264891       1 main.go:301] handling current node
	I0908 16:41:51.264704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:41:51.264758       1 main.go:301] handling current node
	I0908 16:42:01.265036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:42:01.265066       1 main.go:301] handling current node
	I0908 16:42:11.264163       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:42:11.264195       1 main.go:301] handling current node
	I0908 16:42:21.265007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:42:21.265038       1 main.go:301] handling current node
	I0908 16:42:31.264157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:42:31.264213       1 main.go:301] handling current node
	I0908 16:42:41.264884       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:42:41.264919       1 main.go:301] handling current node
	I0908 16:42:51.264853       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:42:51.264885       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c149f4d9fd5a92a8ecdfe4bb32a937e84d3147af965fd6b2eaecb56892f798e3] <==
	I0908 16:40:13.253042       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:40:24.269431       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.247.161"}
	I0908 16:40:29.562242       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 16:40:29.899985       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.96.223"}
	I0908 16:40:53.357196       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0908 16:40:54.481742       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 16:40:55.538722       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0908 16:41:10.030709       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0908 16:41:13.217822       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 16:41:17.373828       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:41:17.373967       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:41:17.389446       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:41:17.389594       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:41:17.389626       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:41:17.402324       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:41:17.402496       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:41:17.470206       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:41:17.470245       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 16:41:18.390386       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 16:41:18.470630       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0908 16:41:18.479047       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0908 16:41:37.499806       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:42:05.797696       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:42:48.124397       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:42:53.510372       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.61.1"}
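	[editor's note] The three "Terminating all watchers" lines at 16:41:18 follow the snapshot.storage.k8s.io group registrations above and are consistent with the volumesnapshots CRDs being removed during addon teardown; that causal reading is an inference, not stated in the log. A sketch for checking whether the group is still served:

	    # lists nothing (with an error) once the snapshot API group is gone
	    kubectl --context addons-739733 api-resources --api-group=snapshot.storage.k8s.io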
	
	
	==> kube-controller-manager [16b691c331765145662f2de458b3e33cc13133e6a4d00111b9ae76a6f10d080d] <==
	E0908 16:41:28.145983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:41:35.860936       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:41:35.861975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:41:38.724352       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:41:38.725205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:41:40.181239       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:41:40.182075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0908 16:41:46.292238       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 16:41:46.292270       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 16:41:46.292403       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 16:41:46.292444       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 16:41:54.887235       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:41:54.888244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:01.711610       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:01.712581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:03.908725       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:03.909709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:21.260978       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:21.261919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:30.743971       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:30.744800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:47.129209       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:47.130165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:51.840821       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:51.841879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
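
Both the watch-list attempt and its LIST/WATCH fallback fail with NotFound for *v1.PartialObjectMetadata, which usually means the garbage-collector/quota metadata informers still track an API resource whose backing CRD or APIService has gone away; the controllers recover on the next discovery resync, and the cache-sync lines above show they keep making progress. One way to look for a stale aggregated API, assuming the usual output where the AVAILABLE column reads True for healthy entries:

	kubectl --context addons-739733 get apiservices | grep -v True
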
	
	
	==> kube-proxy [c9b4b4fe957fc6b25f78d8f6c6d21a06d5c6bc7fe31c86543ed652c5f051f4d2] <==
	I0908 16:37:51.466737       1 server_linux.go:53] "Using iptables proxy"
	I0908 16:37:51.865427       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 16:37:51.966287       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 16:37:51.966649       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 16:37:51.966775       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 16:37:52.170442       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 16:37:52.170591       1 server_linux.go:132] "Using iptables Proxier"
	I0908 16:37:52.269008       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 16:37:52.275124       1 server.go:527] "Version info" version="v1.34.0"
	I0908 16:37:52.275406       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 16:37:52.278657       1 config.go:200] "Starting service config controller"
	I0908 16:37:52.278723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 16:37:52.278768       1 config.go:106] "Starting endpoint slice config controller"
	I0908 16:37:52.278775       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 16:37:52.278788       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 16:37:52.278793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 16:37:52.289282       1 config.go:309] "Starting node config controller"
	I0908 16:37:52.289306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 16:37:52.289317       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 16:37:52.379293       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 16:37:52.379343       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 16:37:52.379388       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
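
kube-proxy itself starts cleanly; the only complaint is the unset nodePortAddresses, and the message suggests the fix itself. A minimal sketch, assuming the kubeadm-managed kube-proxy ConfigMap that minikube ships; note that filtering to the primary address would stop NodePort connections on loopback, which the route_localnet line above shows this cluster currently allows:

	# restrict NodePort listeners to the node's primary addresses
	kubectl --context addons-739733 -n kube-system edit configmap kube-proxy
	# in config.conf, set:
	#   nodePortAddresses: ["primary"]
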
	
	
	==> kube-scheduler [06eb2c057807ef9cfaf335da23861ea8b3dbf49cf31bb50fdb411110d7cab2d4] <==
	E0908 16:37:39.185384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 16:37:39.185469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0908 16:37:39.183207       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 16:37:39.187273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 16:37:39.187311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 16:37:39.187324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 16:37:39.187344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 16:37:39.187379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 16:37:39.187404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 16:37:39.187429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 16:37:39.187475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 16:37:39.262518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 16:37:39.266579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 16:37:39.266820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 16:37:39.266945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 16:37:39.267040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 16:37:39.267216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 16:37:39.267221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 16:37:40.191800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 16:37:40.228749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 16:37:40.285496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 16:37:40.305900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 16:37:40.327194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 16:37:40.362357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0908 16:37:40.862917       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
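
The forbidden list/watch errors are the usual kubeadm bootstrap race: the scheduler starts before its RBAC bindings propagate, and the errors stop once authorization catches up (the final cache-sync line above). A quick check that they did not persist, assuming the standard static-pod name kube-scheduler-<node>:

	kubectl --context addons-739733 -n kube-system logs kube-scheduler-addons-739733 --since=10m | grep -c 'Failed to watch'
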
	
	
	==> kubelet <==
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663470    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/336d41c8dbd401c2db1263abaf8e65fa5b0618d3b02b4e0781a58395753548f0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/336d41c8dbd401c2db1263abaf8e65fa5b0618d3b02b4e0781a58395753548f0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663495    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b27ee84e92b2b10c58dd824d25674aa3968f1cca5fed62fe27d1176271ec4364/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b27ee84e92b2b10c58dd824d25674aa3968f1cca5fed62fe27d1176271ec4364/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663542    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b6f21ee9e9a085c56a9fc527318e3819bb8f5547a0be54614b8c14582a5aae45/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b6f21ee9e9a085c56a9fc527318e3819bb8f5547a0be54614b8c14582a5aae45/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663592    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f4a142f02a2cdc796606a7eb200414e8ee46f6a99fd7491734707220d9d84b37/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f4a142f02a2cdc796606a7eb200414e8ee46f6a99fd7491734707220d9d84b37/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663620    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/70b7b4d4bccc6eef25fded26688aa8927ed8c1d1993320c5e0d0abd234ab3585/diff" to get inode usage: stat /var/lib/containers/storage/overlay/70b7b4d4bccc6eef25fded26688aa8927ed8c1d1993320c5e0d0abd234ab3585/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663635    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7beeabaf131ddf172a8ca93c8d50a511941a90fc397cdc8bac3653c3adbe25ef/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7beeabaf131ddf172a8ca93c8d50a511941a90fc397cdc8bac3653c3adbe25ef/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663646    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a6a63a7ba1c72607046a8c23fd04b99c9a0e3d0bbf4a3e2ce46af12721bd14f5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a6a63a7ba1c72607046a8c23fd04b99c9a0e3d0bbf4a3e2ce46af12721bd14f5/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.663862    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f4a142f02a2cdc796606a7eb200414e8ee46f6a99fd7491734707220d9d84b37/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f4a142f02a2cdc796606a7eb200414e8ee46f6a99fd7491734707220d9d84b37/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664583    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/72716ea59f91948160fa47b053e1317ccb1871a558e5c47725ef225d07941822/diff" to get inode usage: stat /var/lib/containers/storage/overlay/72716ea59f91948160fa47b053e1317ccb1871a558e5c47725ef225d07941822/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664615    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b6f21ee9e9a085c56a9fc527318e3819bb8f5547a0be54614b8c14582a5aae45/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b6f21ee9e9a085c56a9fc527318e3819bb8f5547a0be54614b8c14582a5aae45/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664633    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9673ab253888adf21dc7827b921e332ac74a9e5cee7fdcf1aa2348d0aa594f4e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9673ab253888adf21dc7827b921e332ac74a9e5cee7fdcf1aa2348d0aa594f4e/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664656    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a6a63a7ba1c72607046a8c23fd04b99c9a0e3d0bbf4a3e2ce46af12721bd14f5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a6a63a7ba1c72607046a8c23fd04b99c9a0e3d0bbf4a3e2ce46af12721bd14f5/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664660    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/70b7b4d4bccc6eef25fded26688aa8927ed8c1d1993320c5e0d0abd234ab3585/diff" to get inode usage: stat /var/lib/containers/storage/overlay/70b7b4d4bccc6eef25fded26688aa8927ed8c1d1993320c5e0d0abd234ab3585/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664664    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7beeabaf131ddf172a8ca93c8d50a511941a90fc397cdc8bac3653c3adbe25ef/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7beeabaf131ddf172a8ca93c8d50a511941a90fc397cdc8bac3653c3adbe25ef/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664691    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/72716ea59f91948160fa47b053e1317ccb1871a558e5c47725ef225d07941822/diff" to get inode usage: stat /var/lib/containers/storage/overlay/72716ea59f91948160fa47b053e1317ccb1871a558e5c47725ef225d07941822/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664701    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b27ee84e92b2b10c58dd824d25674aa3968f1cca5fed62fe27d1176271ec4364/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b27ee84e92b2b10c58dd824d25674aa3968f1cca5fed62fe27d1176271ec4364/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.664717    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d71fa65ab8d4de04edafb26ee0ecd0b4f1b564d004abaa4ab80a72bed69fb444/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d71fa65ab8d4de04edafb26ee0ecd0b4f1b564d004abaa4ab80a72bed69fb444/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.674419    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/75d7fd4a141a05f135786e3d7ed445d49c72b9ae8fd91269a9fec14d82efd1d8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/75d7fd4a141a05f135786e3d7ed445d49c72b9ae8fd91269a9fec14d82efd1d8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.687965    1671 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/75d7fd4a141a05f135786e3d7ed445d49c72b9ae8fd91269a9fec14d82efd1d8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/75d7fd4a141a05f135786e3d7ed445d49c72b9ae8fd91269a9fec14d82efd1d8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.793417    1671 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349761793088372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 16:42:41 addons-739733 kubelet[1671]: E0908 16:42:41.793453    1671 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349761793088372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 16:42:51 addons-739733 kubelet[1671]: E0908 16:42:51.796702    1671 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349771796322399  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 16:42:51 addons-739733 kubelet[1671]: E0908 16:42:51.796740    1671 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349771796322399  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 08 16:42:53 addons-739733 kubelet[1671]: I0908 16:42:53.501592    1671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwlr\" (UniqueName: \"kubernetes.io/projected/5c5d8413-6ae0-49b8-93d5-ede3601486c1-kube-api-access-ljwlr\") pod \"hello-world-app-5d498dc89-h6rjt\" (UID: \"5c5d8413-6ae0-49b8-93d5-ede3601486c1\") " pod="default/hello-world-app-5d498dc89-h6rjt"
	Sep 08 16:42:53 addons-739733 kubelet[1671]: W0908 16:42:53.739900    1671 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/40d0ff34d84ef14715ac2dfcaa317a06a4646b0400347e9ced9b9082a13505e3/crio-fd96b7cb63c867d7479c3804d8c1d3a970c7e82d29ed4996b475c6b3a70a2caf WatchSource:0}: Error finding container fd96b7cb63c867d7479c3804d8c1d3a970c7e82d29ed4996b475c6b3a70a2caf: Status 404 returned error can't find the container with id fd96b7cb63c867d7479c3804d8c1d3a970c7e82d29ed4996b475c6b3a70a2caf
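
Two distinct kubelet noises here: cAdvisor stat()ing overlay diff directories that CRI-O has already removed (a container-teardown race, harmless), and the eviction manager rejecting the image-filesystem stats CRI-O returns for /var/lib/containers/storage/overlay-images. Neither blocks the hello-world-app pod that was just scheduled. To inspect the raw stats the runtime reports, one option is crictl's imagefsinfo subcommand, run inside the node:

	minikube -p addons-739733 ssh -- sudo crictl imagefsinfo
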
	
	
	==> storage-provisioner [5c3707b4c745bd2383ec9e1cd5c4298cd66ed47fa1c81cc02ee25b6117f672e1] <==
	W0908 16:42:30.400997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:32.403447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:32.408908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:34.412417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:34.416148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:36.418657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:36.423525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:38.426539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:38.430136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:40.433036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:40.437787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:42.440459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:42.444198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:44.447593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:44.452486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:46.455436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:46.459333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:48.462223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:48.466174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:50.468886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:50.473067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:52.475791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:52.480644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:54.483347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:42:54.487146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
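
The storage-provisioner hits the deprecated core/v1 Endpoints API every two seconds (most likely its Endpoints-based leader-election lock), so the API server's warning handler fires on each call; EndpointSlice is the replacement the warning points at. The equivalent read against the new API, for comparison:

	kubectl --context addons-739733 -n kube-system get endpointslices.discovery.k8s.io
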
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-739733 -n addons-739733
helpers_test.go:269: (dbg) Run:  kubectl --context addons-739733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-h6rjt ingress-nginx-admission-create-pvbnl ingress-nginx-admission-patch-4rjct
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-739733 describe pod hello-world-app-5d498dc89-h6rjt ingress-nginx-admission-create-pvbnl ingress-nginx-admission-patch-4rjct
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-739733 describe pod hello-world-app-5d498dc89-h6rjt ingress-nginx-admission-create-pvbnl ingress-nginx-admission-patch-4rjct: exit status 1 (68.390589ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-h6rjt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-739733/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 16:42:53 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljwlr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ljwlr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-h6rjt to addons-739733
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pvbnl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4rjct" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-739733 describe pod hello-world-app-5d498dc89-h6rjt ingress-nginx-admission-create-pvbnl ingress-nginx-admission-patch-4rjct: exit status 1
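
The two NotFound errors are expected: ingress-nginx-admission-create-* and ingress-nginx-admission-patch-* are Job pods that get cleaned up once their Jobs complete, so by post-mortem time only hello-world-app is left to describe. To confirm they finished rather than failed, one could check the Jobs themselves before the addon is disabled (standard ingress-nginx Job names, assumed here):

	kubectl --context addons-739733 -n ingress-nginx get jobs ingress-nginx-admission-create ingress-nginx-admission-patch
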
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 addons disable ingress --alsologtostderr -v=1: (7.762448711s)
--- FAIL: TestAddons/parallel/Ingress (155.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-849003 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-849003 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-gtttx" [4f7498ad-2577-451c-9037-cc5cd38df3c8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/09/08 16:46:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-849003 -n functional-849003
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 16:56:39.375192164 +0000 UTC m=+1207.299826180
functional_test.go:1645: (dbg) Run:  kubectl --context functional-849003 describe po hello-node-connect-7d85dfc575-gtttx -n default
functional_test.go:1645: (dbg) kubectl --context functional-849003 describe po hello-node-connect-7d85dfc575-gtttx -n default:
Name:             hello-node-connect-7d85dfc575-gtttx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-849003/192.168.49.2
Start Time:       Mon, 08 Sep 2025 16:46:38 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z4vrk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z4vrk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-gtttx to functional-849003
  Normal   Pulling    6m52s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m50s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m52s (x5 over 9m50s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m46s (x20 over 9m49s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m34s (x21 over 9m49s)  kubelet            Back-off pulling image "kicbase/echo-server"
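
The Events pin down the failure: the deployment was created with the bare name kicbase/echo-server, and CRI-O refuses short names unless an alias or unqualified-search registry is configured, so the pull can never succeed. Two possible fixes, either fully qualify the image reference (the :1.0 tag is the one used elsewhere in this run) or add a search registry on the node; the second is a sketch only:

	# fix 1: fully qualified image, no short-name resolution needed
	kubectl --context functional-849003 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:1.0

	# fix 2 (sketch): on the node, /etc/containers/registries.conf should contain
	#   unqualified-search-registries = ["docker.io"]
	# then restart CRI-O so it rereads the file:
	minikube -p functional-849003 ssh -- sudo systemctl restart crio
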
functional_test.go:1645: (dbg) Run:  kubectl --context functional-849003 logs hello-node-connect-7d85dfc575-gtttx -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-849003 logs hello-node-connect-7d85dfc575-gtttx -n default: exit status 1 (62.523931ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-gtttx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-849003 logs hello-node-connect-7d85dfc575-gtttx -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-849003 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-gtttx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-849003/192.168.49.2
Start Time:       Mon, 08 Sep 2025 16:46:38 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z4vrk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z4vrk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-gtttx to functional-849003
  Normal   Pulling    6m52s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m50s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m52s (x5 over 9m50s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m46s (x20 over 9m49s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m34s (x21 over 9m49s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-849003 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-849003 logs -l app=hello-node-connect: exit status 1 (61.249499ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-gtttx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-849003 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-849003 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.23.174
IPs:                      10.100.23.174
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32671/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
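
The service itself is wired correctly (selector, port 8080, NodePort 32671); the empty Endpoints line is just the downstream symptom of the never-ready pod. The same fact can be read from the replacement API, using the standard service-name label on slices:

	kubectl --context functional-849003 get endpointslices \
	  -l kubernetes.io/service-name=hello-node-connect
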
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-849003
helpers_test.go:243: (dbg) docker inspect functional-849003:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73",
	        "Created": "2025-09-08T16:44:00.968170563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37418,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T16:44:01.003725579Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/hosts",
	        "LogPath": "/var/lib/docker/containers/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73-json.log",
	        "Name": "/functional-849003",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-849003:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-849003",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73",
	                "LowerDir": "/var/lib/docker/overlay2/3da545634c12ccf7ed57ffa295bff657c918c8b600750b1853472fe27669354c-init/diff:/var/lib/docker/overlay2/e8e8fc7fb28a55bf413358d36a5c2b32c680c35a010c40a038aea7770a9d1ab7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da545634c12ccf7ed57ffa295bff657c918c8b600750b1853472fe27669354c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da545634c12ccf7ed57ffa295bff657c918c8b600750b1853472fe27669354c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da545634c12ccf7ed57ffa295bff657c918c8b600750b1853472fe27669354c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-849003",
	                "Source": "/var/lib/docker/volumes/functional-849003/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-849003",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-849003",
	                "name.minikube.sigs.k8s.io": "functional-849003",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "81c7983cf2d2c059162ed9fee4f10f29e0a477d1e626b882a6eb25daebd4c633",
	            "SandboxKey": "/var/run/docker/netns/81c7983cf2d2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-849003": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:38:8c:fb:bb:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d8fbf4a8b95ba5a755594ebf4d3c54e2bf15d88a53ac1347c03957012ca12a2",
	                    "EndpointID": "c148a522d9203953b7515e2fd75bb243159645635dc575eb50c6d945d5cee92f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-849003",
	                        "c2e4676ee659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
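The inspect output above is healthy: the node container is running, privileged, attached to the functional-849003 network at 192.168.49.2, with each guest port published only on loopback (for example the API server's 8441/tcp at 127.0.0.1:32781). Any one mapping can be read back directly:

	docker port functional-849003 8441
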
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-849003 -n functional-849003
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 logs -n 25: (1.426891142s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-849003 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:46 UTC │ 08 Sep 25 16:46 UTC │
	│ ssh            │ functional-849003 ssh sudo cat /etc/ssl/certs/111412.pem                                               │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:46 UTC │ 08 Sep 25 16:46 UTC │
	│ ssh            │ functional-849003 ssh sudo cat /usr/share/ca-certificates/111412.pem                                   │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:46 UTC │ 08 Sep 25 16:46 UTC │
	│ ssh            │ functional-849003 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:46 UTC │ 08 Sep 25 16:46 UTC │
	│ ssh            │ functional-849003 ssh sudo cat /etc/test/nested/copy/11141/hosts                                       │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:46 UTC │ 08 Sep 25 16:46 UTC │
	│ ssh            │ functional-849003 ssh echo hello                                                                       │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ ssh            │ functional-849003 ssh cat /etc/hostname                                                                │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ tunnel         │ functional-849003 tunnel --alsologtostderr                                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │                     │
	│ tunnel         │ functional-849003 tunnel --alsologtostderr                                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │                     │
	│ tunnel         │ functional-849003 tunnel --alsologtostderr                                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │                     │
	│ image          │ functional-849003 image ls --format short --alsologtostderr                                            │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ image          │ functional-849003 image ls --format json --alsologtostderr                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ image          │ functional-849003 image ls --format table --alsologtostderr                                            │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ image          │ functional-849003 image ls --format yaml --alsologtostderr                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ ssh            │ functional-849003 ssh pgrep buildkitd                                                                  │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │                     │
	│ image          │ functional-849003 image build -t localhost/my-image:functional-849003 testdata/build --alsologtostderr │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ image          │ functional-849003 image ls                                                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ update-context │ functional-849003 update-context --alsologtostderr -v=2                                                │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ update-context │ functional-849003 update-context --alsologtostderr -v=2                                                │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ update-context │ functional-849003 update-context --alsologtostderr -v=2                                                │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:47 UTC │ 08 Sep 25 16:47 UTC │
	│ service        │ functional-849003 service list                                                                         │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:56 UTC │ 08 Sep 25 16:56 UTC │
	│ service        │ functional-849003 service list -o json                                                                 │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:56 UTC │ 08 Sep 25 16:56 UTC │
	│ service        │ functional-849003 service --namespace=default --https --url hello-node                                 │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:56 UTC │                     │
	│ service        │ functional-849003 service hello-node --url --format={{.IP}}                                            │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:56 UTC │                     │
	│ service        │ functional-849003 service hello-node --url                                                             │ functional-849003 │ jenkins │ v1.36.0 │ 08 Sep 25 16:56 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:46:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:46:29.989526   48763 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:46:29.989814   48763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:46:29.989826   48763 out.go:374] Setting ErrFile to fd 2...
	I0908 16:46:29.989831   48763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:46:29.990016   48763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 16:46:29.990539   48763 out.go:368] Setting JSON to false
	I0908 16:46:29.991553   48763 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1734,"bootTime":1757348256,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:46:29.991669   48763 start.go:140] virtualization: kvm guest
	I0908 16:46:29.994208   48763 out.go:179] * [functional-849003] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:46:29.995387   48763 notify.go:220] Checking for updates...
	I0908 16:46:29.995406   48763 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:46:29.996718   48763 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:46:29.998046   48763 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:46:29.999238   48763 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 16:46:30.000565   48763 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:46:30.001952   48763 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:46:30.003689   48763 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:46:30.004222   48763 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:46:30.028979   48763 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 16:46:30.029075   48763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:46:30.078676   48763 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 16:46:30.069242945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:46:30.078772   48763 docker.go:318] overlay module found
	I0908 16:46:30.080831   48763 out.go:179] * Using the docker driver based on existing profile
	I0908 16:46:30.082772   48763 start.go:304] selected driver: docker
	I0908 16:46:30.082792   48763 start.go:918] validating driver "docker" against &{Name:functional-849003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-849003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:46:30.082884   48763 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:46:30.082962   48763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:46:30.132949   48763 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 16:46:30.123735529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:46:30.134015   48763 cni.go:84] Creating CNI manager for ""
	I0908 16:46:30.134091   48763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 16:46:30.134159   48763 start.go:348] cluster config:
	{Name:functional-849003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-849003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:46:30.136206   48763 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.912482996Z" level=info msg="Got pod network &{Name:nginx-svc Namespace:default ID:e2f534a137333078bb9594f4a51ba97d055d8fe516463f8fc461063f7d7c253f UID:f724f1d7-ad34-4e5d-a1d7-1bdf41729b65 NetNS:/var/run/netns/1d978053-672f-49ff-b27d-45f99eadab09 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.912617703Z" level=info msg="Checking pod default_nginx-svc for CNI network kindnet (type=ptp)"
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.915153816Z" level=info msg="Ran pod sandbox e2f534a137333078bb9594f4a51ba97d055d8fe516463f8fc461063f7d7c253f with infra container: default/nginx-svc/POD" id=881a3039-3213-4b2e-928a-787114b532fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.916451811Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=640659e8-2bc1-4e76-8c89-44133fde3732 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.916686189Z" level=info msg="Image docker.io/nginx:alpine not found" id=640659e8-2bc1-4e76-8c89-44133fde3732 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.917208426Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=bdbf027e-93a4-47b1-b343-6f94c7395dce name=/runtime.v1.ImageService/PullImage
	Sep 08 16:47:16 functional-849003 crio[4934]: time="2025-09-08 16:47:16.918761698Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 16:47:17 functional-849003 crio[4934]: time="2025-09-08 16:47:17.955031372Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.252478003Z" level=info msg="Pulled image: docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8" id=bdbf027e-93a4-47b1-b343-6f94c7395dce name=/runtime.v1.ImageService/PullImage
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.253268261Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=5ab2e45d-f3a8-49e5-a02c-48ed38a3ce9f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.255034764Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a],Size_:53949946,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5ab2e45d-f3a8-49e5-a02c-48ed38a3ce9f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.262802492Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=646af6b9-31f9-4021-8497-2facf1945f41 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.264511896Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a],Size_:53949946,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=646af6b9-31f9-4021-8497-2facf1945f41 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.267944738Z" level=info msg="Creating container: default/nginx-svc/nginx" id=37635b47-6512-4ffb-a3a2-d675bad72026 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.268042968Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.319527108Z" level=info msg="Created container 001a98b162fb3238e8e9a3c2fd1daf10c92083577dc232b51ff38c26ce85556c: default/nginx-svc/nginx" id=37635b47-6512-4ffb-a3a2-d675bad72026 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.320257018Z" level=info msg="Starting container: 001a98b162fb3238e8e9a3c2fd1daf10c92083577dc232b51ff38c26ce85556c" id=491f129b-e408-4aec-b1b6-ed6307800e74 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 16:47:21 functional-849003 crio[4934]: time="2025-09-08 16:47:21.326299721Z" level=info msg="Started container" PID=9252 containerID=001a98b162fb3238e8e9a3c2fd1daf10c92083577dc232b51ff38c26ce85556c description=default/nginx-svc/nginx id=491f129b-e408-4aec-b1b6-ed6307800e74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2f534a137333078bb9594f4a51ba97d055d8fe516463f8fc461063f7d7c253f
	Sep 08 16:47:34 functional-849003 crio[4934]: time="2025-09-08 16:47:34.973555022Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1f817448-5a24-4a70-bcf1-2c92196d5e0d name=/runtime.v1.ImageService/PullImage
	Sep 08 16:47:58 functional-849003 crio[4934]: time="2025-09-08 16:47:58.972593274Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2d519aac-2674-4036-9268-b9c6500cbb17 name=/runtime.v1.ImageService/PullImage
	Sep 08 16:48:15 functional-849003 crio[4934]: time="2025-09-08 16:48:15.973094119Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b14d057d-d3c7-4cdd-8e06-149d104242ee name=/runtime.v1.ImageService/PullImage
	Sep 08 16:49:25 functional-849003 crio[4934]: time="2025-09-08 16:49:25.973861436Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=13b0df7f-1b19-4aae-8d99-3e0925b0916c name=/runtime.v1.ImageService/PullImage
	Sep 08 16:49:47 functional-849003 crio[4934]: time="2025-09-08 16:49:47.972635704Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=109aa634-cde1-47a2-86f1-3bac8f3e6265 name=/runtime.v1.ImageService/PullImage
	Sep 08 16:52:12 functional-849003 crio[4934]: time="2025-09-08 16:52:12.973395712Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1458ed1e-6229-4609-9626-b06bde9d08c2 name=/runtime.v1.ImageService/PullImage
	Sep 08 16:52:31 functional-849003 crio[4934]: time="2025-09-08 16:52:31.973404601Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b3c276d6-a685-43e5-a8a3-a459d86dea64 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	001a98b162fb3       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  9 minutes ago       Running             nginx                       0                   e2f534a137333       nginx-svc
	78c932079514d       docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57                  9 minutes ago       Running             myfrontend                  0                   e77b38f52516a       sp-pod
	2ff2d6e0e960f       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  9 minutes ago       Running             mysql                       0                   f7e4da7749f9f       mysql-5bb876957f-mmmpf
	5ad877127ba31       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   23adbc1fe39e4       dashboard-metrics-scraper-77bf4d6c4c-4rn7h
	2edbbbf1e6f5f       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   10d7a5a135f63       kubernetes-dashboard-855c9754f9-466wp
	a9833e443407c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   d63c962672203       busybox-mount
	c06b270117631       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     2                   d5d9655d305f7       coredns-66bc5c9577-l96kh
	683bb0176c663       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  2                   6e1e0463e8215       kube-proxy-f2zrs
	bcc0800b590d8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 2                   e13de5beb3ab5       kindnet-f6vlk
	67f4ae85f4ba9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   7cdd471f62975       storage-provisioner
	b8a90b9b774aa       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   b70c2f21fafbe       kube-apiserver-functional-849003
	01255ef632412       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        2                   33c76472f71b1       etcd-functional-849003
	c6957802d4d8c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     2                   56b1edd2d5a3e       kube-controller-manager-functional-849003
	fd8e0e93784f4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              2                   f10d8c5a0f168       kube-scheduler-functional-849003
	0ab4c4a8590d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   7cdd471f62975       storage-provisioner
	474e40286795d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              1                   f10d8c5a0f168       kube-scheduler-functional-849003
	cafc6fb133b09       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 1                   e13de5beb3ab5       kindnet-f6vlk
	3a682d0325be1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     1                   d5d9655d305f7       coredns-66bc5c9577-l96kh
	ee722de70a0c9       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     1                   56b1edd2d5a3e       kube-controller-manager-functional-849003
	e5b696b0314ef       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  1                   6e1e0463e8215       kube-proxy-f2zrs
	5623c84145b91       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        1                   33c76472f71b1       etcd-functional-849003
	
	
	==> coredns [3a682d0325be147f9ea93af0c670a6cc930471aaa65bacb69c3a0360a3e94f63] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51643 - 53217 "HINFO IN 8461127771017658655.1322266181736575607. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.095308989s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c06b270117631a73bfe37084349a13a2ef68c8a700bbf6db87a714cda14a646c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55994 - 50621 "HINFO IN 7139258076839000536.6770123824325932968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020623783s
	
	
	==> describe nodes <==
	Name:               functional-849003
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-849003
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=functional-849003
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T16_44_18_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 16:44:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-849003
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 16:56:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 16:55:06 +0000   Mon, 08 Sep 2025 16:44:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 16:55:06 +0000   Mon, 08 Sep 2025 16:44:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 16:55:06 +0000   Mon, 08 Sep 2025 16:44:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 16:55:06 +0000   Mon, 08 Sep 2025 16:45:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-849003
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 34e088e7c9934669aed7111e08e85147
	  System UUID:                a9665836-0141-48ea-9007-1707247ea163
	  Boot ID:                    b484f3f8-b9f0-49fd-b361-646a5559e856
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-ntvh9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-gtttx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-mmmpf                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m44s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-66bc5c9577-l96kh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-849003                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-f6vlk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-849003              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-849003     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-f2zrs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-849003              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4rn7h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-466wp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-849003 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-849003 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-849003 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-849003 event: Registered Node functional-849003 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-849003 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-849003 event: Registered Node functional-849003 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-849003 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-849003 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-849003 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-849003 event: Registered Node functional-849003 in Controller
	
	
	==> dmesg <==
	[  +0.605815] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021322] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[ +10.132527] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 8 16:40] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +1.013561] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +2.019851] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +4.059716] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[  +8.195413] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[Sep 8 16:41] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[ +34.049604] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: b6 09 d3 58 cb 42 92 34 17 ea 90 9a 08 00
	[Sep 8 16:46] FS-Cache: Duplicate cookie detected
	[  +0.004699] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006751] FS-Cache: O-cookie d=00000000271e5ae0{9P.session} n=00000000106ee60c
	[  +0.007546] FS-Cache: O-key=[10] '34323935333237363530'
	[  +0.005399] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006722] FS-Cache: N-cookie d=00000000271e5ae0{9P.session} n=000000006bfff402
	[  +0.008922] FS-Cache: N-key=[10] '34323935333237363530'
	[Sep 8 16:47] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [01255ef632412e1f6a929f1e148f953e899915b21bfebaac44a661f4d9d9bbeb] <==
	{"level":"warn","ts":"2025-09-08T16:46:04.094530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.100837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.106908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.113241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.119669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.168969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.176819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.201880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.209520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.222793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.229161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.235446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.242774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.249570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.256108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.267267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.273836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.280440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.313270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.328632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.335026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:46:04.375412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52588","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T16:56:03.487221Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1207}
	{"level":"info","ts":"2025-09-08T16:56:03.508157Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1207,"took":"20.582496ms","hash":3833476402,"current-db-size-bytes":3661824,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1667072,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-08T16:56:03.508215Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3833476402,"revision":1207,"compact-revision":-1}
	
	
	==> etcd [5623c84145b9101b82b8095ba2d1975ddd496e63e589f46460588d5012089341] <==
	{"level":"warn","ts":"2025-09-08T16:45:18.162456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:45:18.172906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:45:18.180666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:45:18.214753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:45:18.262114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:45:18.268680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T16:45:18.317438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T16:45:43.724026Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T16:45:43.724128Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-849003","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T16:45:43.724215Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T16:45:43.860472Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T16:45:43.860599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T16:45:43.860683Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-08T16:45:43.860706Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T16:45:43.860746Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T16:45:43.860731Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-08T16:45:43.860764Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-08T16:45:43.860771Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-08T16:45:43.860772Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T16:45:43.860790Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-08T16:45:43.860756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T16:45:43.864001Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T16:45:43.864065Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T16:45:43.864092Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T16:45:43.864100Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-849003","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 16:56:40 up 39 min,  0 users,  load average: 0.08, 0.23, 0.35
	Linux functional-849003 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [bcc0800b590d84137cb01df2ab33e78407c390d1906c8189f663bced92daedbc] <==
	I0908 16:54:35.781936       1 main.go:301] handling current node
	I0908 16:54:45.778903       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:54:45.778939       1 main.go:301] handling current node
	I0908 16:54:55.783158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:54:55.783198       1 main.go:301] handling current node
	I0908 16:55:05.777242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:55:05.777768       1 main.go:301] handling current node
	I0908 16:55:15.774858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:55:15.774896       1 main.go:301] handling current node
	I0908 16:55:25.774719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:55:25.774754       1 main.go:301] handling current node
	I0908 16:55:35.777810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:55:35.777848       1 main.go:301] handling current node
	I0908 16:55:45.773999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:55:45.774039       1 main.go:301] handling current node
	I0908 16:55:55.774283       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:55:55.774345       1 main.go:301] handling current node
	I0908 16:56:05.775887       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:56:05.775925       1 main.go:301] handling current node
	I0908 16:56:15.779355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:56:15.779399       1 main.go:301] handling current node
	I0908 16:56:25.774389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:56:25.774431       1 main.go:301] handling current node
	I0908 16:56:35.781765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:56:35.781801       1 main.go:301] handling current node
	
	
	==> kindnet [cafc6fb133b09f61bd46aaa3fb56e6c8552e5768839fec3ee659fa8c7dac0af3] <==
	I0908 16:45:16.364659       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 16:45:16.364913       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 16:45:16.365161       1 main.go:148] setting mtu 1500 for CNI 
	I0908 16:45:16.365181       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 16:45:16.365204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T16:45:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 16:45:16.663272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 16:45:16.663405       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 16:45:16.663441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 16:45:16.663619       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 16:45:19.264636       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 16:45:19.264743       1 metrics.go:72] Registering metrics
	I0908 16:45:19.264855       1 controller.go:711] "Syncing nftables rules"
	I0908 16:45:26.577753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:45:26.577821       1 main.go:301] handling current node
	I0908 16:45:36.573785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 16:45:36.573825       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b8a90b9b774aa29588aaa3fa5a758b8e7d1c8d5c7b32c020532f18d483558cfb] <==
	I0908 16:46:31.276631       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.246.145"}
	I0908 16:46:39.084699       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.23.174"}
	I0908 16:46:56.197031       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.193.41"}
	E0908 16:46:56.635521       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40208: use of closed network connection
	E0908 16:47:15.336002       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36686: use of closed network connection
	E0908 16:47:15.538539       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36706: use of closed network connection
	I0908 16:47:15.540619       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:47:16.585745       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.140.117"}
	E0908 16:47:16.861026       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36750: use of closed network connection
	I0908 16:47:29.239217       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:48:35.761257       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:48:36.981421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:49:47.736078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:49:58.834872       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:50:59.560132       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:51:02.346428       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:52:08.735250       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:52:09.236794       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:53:26.210964       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:53:37.846872       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:54:33.953822       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:54:54.392428       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:55:49.705187       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:55:59.566217       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:56:04.876513       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c6957802d4d8c796dbbe22a18ff4d03e9a0a1c28723d677ef9d7f8ccfe83fb49] <==
	I0908 16:46:08.273536       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 16:46:08.273633       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 16:46:08.273680       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 16:46:08.273690       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 16:46:08.274855       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 16:46:08.275817       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 16:46:08.275844       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 16:46:08.279056       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 16:46:08.279089       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 16:46:08.279101       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 16:46:08.279151       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 16:46:08.279162       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 16:46:08.279169       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 16:46:08.279188       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 16:46:08.281355       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 16:46:08.282622       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 16:46:08.283697       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 16:46:08.288996       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 16:46:31.050322       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 16:46:31.065466       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 16:46:31.068955       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 16:46:31.069156       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 16:46:31.073602       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 16:46:31.077439       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 16:46:31.080161       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ee722de70a0c9ca142288e5b043b16939e14c0d99e52f4e255be08853c0aaa87] <==
	I0908 16:45:22.462790       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 16:45:22.462232       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 16:45:22.462261       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 16:45:22.477337       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 16:45:22.477381       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 16:45:22.477391       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 16:45:22.477424       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 16:45:22.477444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0908 16:45:22.477447       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 16:45:22.477360       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 16:45:22.477541       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 16:45:22.477552       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 16:45:22.477560       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 16:45:22.477543       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 16:45:22.477645       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-849003"
	I0908 16:45:22.477715       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 16:45:22.480868       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 16:45:22.482055       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 16:45:22.483104       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 16:45:22.483120       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 16:45:22.485419       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 16:45:22.485432       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 16:45:22.485492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 16:45:22.486724       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 16:45:22.500087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [683bb0176c6638bb090d1136d130b767db7d84834c86b8c56a69ac82daae8816] <==
	I0908 16:46:05.490215       1 server_linux.go:53] "Using iptables proxy"
	I0908 16:46:05.603216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 16:46:05.704250       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 16:46:05.704289       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 16:46:05.704388       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 16:46:05.779355       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 16:46:05.779410       1 server_linux.go:132] "Using iptables Proxier"
	I0908 16:46:05.784032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 16:46:05.784393       1 server.go:527] "Version info" version="v1.34.0"
	I0908 16:46:05.784410       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 16:46:05.785608       1 config.go:200] "Starting service config controller"
	I0908 16:46:05.785625       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 16:46:05.785704       1 config.go:106] "Starting endpoint slice config controller"
	I0908 16:46:05.785724       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 16:46:05.785745       1 config.go:309] "Starting node config controller"
	I0908 16:46:05.785755       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 16:46:05.785865       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 16:46:05.785893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 16:46:05.885758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 16:46:05.885791       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 16:46:05.885843       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 16:46:05.886069       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e5b696b0314efc8c3c1321f8b5c3a28faad5b40d4f575a54862b51b8f10289d0] <==
	I0908 16:45:16.188403       1 server_linux.go:53] "Using iptables proxy"
	I0908 16:45:16.483800       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 16:45:19.086441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 16:45:19.086469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 16:45:19.086538       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 16:45:19.278314       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 16:45:19.278383       1 server_linux.go:132] "Using iptables Proxier"
	I0908 16:45:19.285505       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 16:45:19.285961       1 server.go:527] "Version info" version="v1.34.0"
	I0908 16:45:19.285995       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 16:45:19.287381       1 config.go:106] "Starting endpoint slice config controller"
	I0908 16:45:19.287499       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 16:45:19.287538       1 config.go:309] "Starting node config controller"
	I0908 16:45:19.287627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 16:45:19.287596       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 16:45:19.287863       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 16:45:19.287579       1 config.go:200] "Starting service config controller"
	I0908 16:45:19.287920       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 16:45:19.387734       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 16:45:19.387744       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 16:45:19.388432       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 16:45:19.388547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [474e40286795d74ee224a9b01126768b5d4738228415b0d3f806422b5674a5d2] <==
	I0908 16:45:17.007364       1 serving.go:386] Generated self-signed cert in-memory
	W0908 16:45:18.974112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 16:45:18.974159       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 16:45:18.974173       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 16:45:18.974183       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 16:45:19.164605       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 16:45:19.164648       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 16:45:19.173827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 16:45:19.173989       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 16:45:19.174070       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 16:45:19.174938       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 16:45:19.274863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 16:45:43.725317       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 16:45:43.725379       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 16:45:43.725414       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 16:45:43.725422       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 16:45:43.725447       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 16:45:43.725499       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fd8e0e93784f478cc9e9304a202d9473cdc661e4f19d17f28d7689e721b5719b] <==
	I0908 16:46:03.018493       1 serving.go:386] Generated self-signed cert in-memory
	W0908 16:46:04.868030       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 16:46:04.868078       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 16:46:04.868092       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 16:46:04.868104       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 16:46:04.978782       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 16:46:04.979812       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 16:46:04.983912       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 16:46:04.984612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 16:46:05.061986       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 16:46:05.062230       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 16:46:05.085154       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.113165    5298 manager.go:1116] Failed to create existing container: /docker/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/crio-7cdd471f629754504be9ef8777c268918f99181b021d7aff3f176fef714ad64c: Error finding container 7cdd471f629754504be9ef8777c268918f99181b021d7aff3f176fef714ad64c: Status 404 returned error can't find the container with id 7cdd471f629754504be9ef8777c268918f99181b021d7aff3f176fef714ad64c
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.113322    5298 manager.go:1116] Failed to create existing container: /docker/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/crio-f10d8c5a0f168f61a4a8dfa1a2d0f0015ea1aadf74d3187d70e8fbc292ebe1cf: Error finding container f10d8c5a0f168f61a4a8dfa1a2d0f0015ea1aadf74d3187d70e8fbc292ebe1cf: Status 404 returned error can't find the container with id f10d8c5a0f168f61a4a8dfa1a2d0f0015ea1aadf74d3187d70e8fbc292ebe1cf
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.113521    5298 manager.go:1116] Failed to create existing container: /crio-6e1e0463e821503b120773d98481949ae2b6dea82014072b9fac5993ef479963: Error finding container 6e1e0463e821503b120773d98481949ae2b6dea82014072b9fac5993ef479963: Status 404 returned error can't find the container with id 6e1e0463e821503b120773d98481949ae2b6dea82014072b9fac5993ef479963
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.113791    5298 manager.go:1116] Failed to create existing container: /docker/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/crio-ea76a3e43afaa12b1c39fb938ced31a637faaeabfb9e3a45dd4f3a50ed97fb87: Error finding container ea76a3e43afaa12b1c39fb938ced31a637faaeabfb9e3a45dd4f3a50ed97fb87: Status 404 returned error can't find the container with id ea76a3e43afaa12b1c39fb938ced31a637faaeabfb9e3a45dd4f3a50ed97fb87
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.113984    5298 manager.go:1116] Failed to create existing container: /docker/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/crio-e13de5beb3ab52efb0f0504a9aa83aeac9431b33514d7c8d98251adaacadb9aa: Error finding container e13de5beb3ab52efb0f0504a9aa83aeac9431b33514d7c8d98251adaacadb9aa: Status 404 returned error can't find the container with id e13de5beb3ab52efb0f0504a9aa83aeac9431b33514d7c8d98251adaacadb9aa
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.114152    5298 manager.go:1116] Failed to create existing container: /crio-33c76472f71b161854ccb7a6a168a00bc7f6fa9970e526b034950b08201d597a: Error finding container 33c76472f71b161854ccb7a6a168a00bc7f6fa9970e526b034950b08201d597a: Status 404 returned error can't find the container with id 33c76472f71b161854ccb7a6a168a00bc7f6fa9970e526b034950b08201d597a
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.114302    5298 manager.go:1116] Failed to create existing container: /docker/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/crio-d5d9655d305f75f199bd7184b6c84541549530fdf37346445363826fd484a7bb: Error finding container d5d9655d305f75f199bd7184b6c84541549530fdf37346445363826fd484a7bb: Status 404 returned error can't find the container with id d5d9655d305f75f199bd7184b6c84541549530fdf37346445363826fd484a7bb
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.114523    5298 manager.go:1116] Failed to create existing container: /crio-2a1fb0e17fe3479500106e239f4ed1cf6d3776d14d5edd3137b54d6cd008972b: Error finding container 2a1fb0e17fe3479500106e239f4ed1cf6d3776d14d5edd3137b54d6cd008972b: Status 404 returned error can't find the container with id 2a1fb0e17fe3479500106e239f4ed1cf6d3776d14d5edd3137b54d6cd008972b
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.114777    5298 manager.go:1116] Failed to create existing container: /docker/c2e4676ee659a33db65a2ef7f640afd8b72a175f83af82551dc3f8b118c06d73/crio-33c76472f71b161854ccb7a6a168a00bc7f6fa9970e526b034950b08201d597a: Error finding container 33c76472f71b161854ccb7a6a168a00bc7f6fa9970e526b034950b08201d597a: Status 404 returned error can't find the container with id 33c76472f71b161854ccb7a6a168a00bc7f6fa9970e526b034950b08201d597a
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.255768    5298 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757350561255588621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:01 functional-849003 kubelet[5298]: E0908 16:56:01.255798    5298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757350561255588621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:06 functional-849003 kubelet[5298]: E0908 16:56:06.973198    5298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-ntvh9" podUID="4e4947d1-e5f2-4df0-95c1-5d2035e8db4b"
	Sep 08 16:56:09 functional-849003 kubelet[5298]: E0908 16:56:09.972498    5298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-gtttx" podUID="4f7498ad-2577-451c-9037-cc5cd38df3c8"
	Sep 08 16:56:11 functional-849003 kubelet[5298]: E0908 16:56:11.257276    5298 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757350571257056185  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:11 functional-849003 kubelet[5298]: E0908 16:56:11.257314    5298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757350571257056185  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:20 functional-849003 kubelet[5298]: E0908 16:56:20.973196    5298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-gtttx" podUID="4f7498ad-2577-451c-9037-cc5cd38df3c8"
	Sep 08 16:56:20 functional-849003 kubelet[5298]: E0908 16:56:20.973215    5298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-ntvh9" podUID="4e4947d1-e5f2-4df0-95c1-5d2035e8db4b"
	Sep 08 16:56:21 functional-849003 kubelet[5298]: E0908 16:56:21.258998    5298 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757350581258797548  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:21 functional-849003 kubelet[5298]: E0908 16:56:21.259035    5298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757350581258797548  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:31 functional-849003 kubelet[5298]: E0908 16:56:31.260482    5298 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757350591260258367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:31 functional-849003 kubelet[5298]: E0908 16:56:31.260521    5298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757350591260258367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:32 functional-849003 kubelet[5298]: E0908 16:56:32.972964    5298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-gtttx" podUID="4f7498ad-2577-451c-9037-cc5cd38df3c8"
	Sep 08 16:56:33 functional-849003 kubelet[5298]: E0908 16:56:33.972827    5298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-ntvh9" podUID="4e4947d1-e5f2-4df0-95c1-5d2035e8db4b"
	Sep 08 16:56:41 functional-849003 kubelet[5298]: E0908 16:56:41.262290    5298 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757350601262057490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
	Sep 08 16:56:41 functional-849003 kubelet[5298]: E0908 16:56:41.262337    5298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757350601262057490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303428}  inodes_used:{value:134}}"
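
	The ImagePullBackOff churn above has a single root cause, spelled out in the err string: CRI-O rejects the short name "kicbase/echo-server" because the node's /etc/containers/registries.conf defines no unqualified-search registries. The eviction-manager "missing image stats" lines look like a separate kubelet/CRI-O stats mismatch and, on this run's evidence, are unrelated to the pull failures. A quick check of the policy, plus the registries.conf(5) line that would change the outcome (the docker.io value is an assumption for illustration, not something this run configures):

	    # confirm the policy is empty inside the node (0 matches expected here)
	    $ minikube -p functional-849003 ssh -- grep -c unqualified-search-registries /etc/containers/registries.conf
	    # a registries.conf(5) entry that would let short names resolve:
	    #   unqualified-search-registries = ["docker.io"]
	    # CRI-O picks the file up after: sudo systemctl restart crio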
	
	
	==> kubernetes-dashboard [2edbbbf1e6f5f813f612f421fe0963405fdda9dd19b22fbd9446a7108a358348] <==
	2025/09/08 16:46:39 Using namespace: kubernetes-dashboard
	2025/09/08 16:46:39 Using in-cluster config to connect to apiserver
	2025/09/08 16:46:39 Using secret token for csrf signing
	2025/09/08 16:46:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/08 16:46:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/08 16:46:39 Successful initial request to the apiserver, version: v1.34.0
	2025/09/08 16:46:39 Generating JWE encryption key
	2025/09/08 16:46:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/08 16:46:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/08 16:46:40 Initializing JWE encryption key from synchronized object
	2025/09/08 16:46:40 Creating in-cluster Sidecar client
	2025/09/08 16:46:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/08 16:46:40 Serving insecurely on HTTP port: 9090
	2025/09/08 16:47:10 Successful request to sidecar
	2025/09/08 16:46:39 Starting overwatch
	
	
	==> storage-provisioner [0ab4c4a8590d03c376dc25da9cf3e57de63fca6129b89071a2bcca3f895f86a6] <==
	I0908 16:45:29.459859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 16:45:29.468320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 16:45:29.468372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 16:45:29.472722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:45:32.927578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:45:37.188403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:45:40.786971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [67f4ae85f4ba98339acf0d7bf5e3058d6289f3bce8e48f8816d4a144a2e54abd] <==
	W0908 16:56:15.357208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:17.359972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:17.364002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:19.366880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:19.370941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:21.373979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:21.378173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:23.381064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:23.385302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:25.388293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:25.393902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:27.397299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:27.401765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:29.404977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:29.408859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:31.411702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:31.417414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:33.420998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:33.425404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:35.428622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:35.433984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:37.437368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:37.441763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:39.445089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:56:39.449592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
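
	Both storage-provisioner instances poll their leader-election lock every couple of seconds, and each poll trips the same v1 Endpoints deprecation warning, because the lock is the legacy Endpoints object named in the first instance's "attempting to acquire leader lease" line. Both sides can be inspected from outside (context and names taken from this report):

	    # the legacy Endpoints-based lock the provisioner keeps polling
	    $ kubectl --context functional-849003 -n kube-system get endpoints k8s.io-minikube-hostpath
	    # the replacement resource the warning points at
	    $ kubectl --context functional-849003 -n kube-system get endpointslices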
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-849003 -n functional-849003
helpers_test.go:269: (dbg) Run:  kubectl --context functional-849003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-ntvh9 hello-node-connect-7d85dfc575-gtttx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-849003 describe pod busybox-mount hello-node-75c85bcc94-ntvh9 hello-node-connect-7d85dfc575-gtttx
helpers_test.go:290: (dbg) kubectl --context functional-849003 describe pod busybox-mount hello-node-75c85bcc94-ntvh9 hello-node-connect-7d85dfc575-gtttx:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-849003/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 16:46:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://a9833e443407c8f67c93b53ffa24e369bdd4511a055b68c0e0014f90160ee7ad
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 16:46:33 +0000
	      Finished:     Mon, 08 Sep 2025 16:46:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7455h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7455h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-849003
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.055s (3.055s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-ntvh9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-849003/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 16:46:27 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bw4gx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bw4gx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-ntvh9 to functional-849003
	  Normal   Pulling    7m16s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m16s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m16s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x43 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     8s (x43 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-gtttx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-849003/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 16:46:38 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z4vrk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z4vrk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-gtttx to functional-849003
	  Normal   Pulling    6m54s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 9m52s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m54s (x5 over 9m52s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m48s (x20 over 9m51s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x21 over 9m51s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-849003 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-849003 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ntvh9" [4e4947d1-e5f2-4df0-95c1-5d2035e8db4b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-849003 -n functional-849003
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 16:56:27.639554429 +0000 UTC m=+1195.564188446
functional_test.go:1460: (dbg) Run:  kubectl --context functional-849003 describe po hello-node-75c85bcc94-ntvh9 -n default
functional_test.go:1460: (dbg) kubectl --context functional-849003 describe po hello-node-75c85bcc94-ntvh9 -n default:
Name:             hello-node-75c85bcc94-ntvh9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-849003/192.168.49.2
Start Time:       Mon, 08 Sep 2025 16:46:27 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bw4gx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bw4gx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-ntvh9 to functional-849003
Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-849003 logs hello-node-75c85bcc94-ntvh9 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-849003 logs hello-node-75c85bcc94-ntvh9 -n default: exit status 1 (72.230557ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-ntvh9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-849003 logs hello-node-75c85bcc94-ntvh9 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)
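
The remaining ServiceCmd failures below all chain from this one: no pod ever runs behind the hello-node Service, so minikube's service subcommands bail out. A minimal workaround sketch, assuming the image is pullable from docker.io (registry and tag are illustrative; the test command pins neither):

    $ kubectl --context functional-849003 set image deployment/hello-node \
          echo-server=docker.io/kicbase/echo-server:latest
    $ kubectl --context functional-849003 rollout status deployment/hello-node --timeout=120s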

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 service --namespace=default --https --url hello-node: exit status 115 (516.920255ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31002
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-849003 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
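
Exit status 115 is minikube's SVC_UNREACHABLE reason code: the NodePort URL is resolved and printed to stdout, but the command still fails because nothing backs the Service. Verifying that from the cluster side (names from this report; kubernetes.io/service-name is the standard EndpointSlice label):

    $ kubectl --context functional-849003 get endpointslices -l kubernetes.io/service-name=hello-node
    $ kubectl --context functional-849003 get pods -l app=hello-node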

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 service hello-node --url --format={{.IP}}: exit status 115 (521.604442ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-849003 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 service hello-node --url: exit status 115 (510.305903ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31002
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-849003 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31002
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    

Test pass (298/332)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.09
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 12.52
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.12
21 TestBinaryMirror 0.8
22 TestOffline 91.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 180.53
31 TestAddons/serial/GCPAuth/Namespaces 0.19
35 TestAddons/parallel/Registry 17.02
36 TestAddons/parallel/RegistryCreds 0.65
38 TestAddons/parallel/InspektorGadget 6.25
39 TestAddons/parallel/MetricsServer 6.12
41 TestAddons/parallel/CSI 48.21
42 TestAddons/parallel/Headlamp 18.17
43 TestAddons/parallel/CloudSpanner 5.57
44 TestAddons/parallel/LocalPath 58.73
45 TestAddons/parallel/NvidiaDevicePlugin 5.89
46 TestAddons/parallel/Yakd 10.67
47 TestAddons/parallel/AmdGpuDevicePlugin 5.46
48 TestAddons/StoppedEnableDisable 12.11
49 TestCertOptions 29.64
50 TestCertExpiration 224.32
52 TestForceSystemdFlag 32.24
53 TestForceSystemdEnv 25.16
55 TestKVMDriverInstallOrUpdate 2.25
59 TestErrorSpam/setup 22.02
60 TestErrorSpam/start 0.57
61 TestErrorSpam/status 0.87
62 TestErrorSpam/pause 1.5
63 TestErrorSpam/unpause 1.81
64 TestErrorSpam/stop 1.36
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 70.76
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 28.49
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
76 TestFunctional/serial/CacheCmd/cache/add_local 1.99
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 37.98
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.32
87 TestFunctional/serial/LogsFileCmd 1.37
88 TestFunctional/serial/InvalidService 4
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 12.87
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.89
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 48.39
102 TestFunctional/parallel/SSHCmd 0.48
103 TestFunctional/parallel/CpCmd 1.59
104 TestFunctional/parallel/MySQL 20.79
105 TestFunctional/parallel/FileSync 0.25
106 TestFunctional/parallel/CertSync 1.48
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
114 TestFunctional/parallel/License 0.32
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
117 TestFunctional/parallel/ProfileCmd/profile_list 0.37
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
119 TestFunctional/parallel/MountCmd/any-port 7.5
120 TestFunctional/parallel/MountCmd/specific-port 1.61
121 TestFunctional/parallel/MountCmd/VerifyCleanup 0.98
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.45
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
128 TestFunctional/parallel/ImageCommands/ImageBuild 3.75
129 TestFunctional/parallel/ImageCommands/Setup 1.71
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.02
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.68
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.39
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.19
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ServiceCmd/List 1.68
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 150.7
164 TestMultiControlPlane/serial/DeployApp 7.76
165 TestMultiControlPlane/serial/PingHostFromPods 1.05
166 TestMultiControlPlane/serial/AddWorkerNode 57.26
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
169 TestMultiControlPlane/serial/CopyFile 15.54
170 TestMultiControlPlane/serial/StopSecondaryNode 12.54
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
172 TestMultiControlPlane/serial/RestartSecondaryNode 22.05
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 137.99
175 TestMultiControlPlane/serial/DeleteSecondaryNode 13.42
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
177 TestMultiControlPlane/serial/StopCluster 35.56
178 TestMultiControlPlane/serial/RestartCluster 61.43
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
180 TestMultiControlPlane/serial/AddSecondaryNode 33.75
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
185 TestJSONOutput/start/Command 69.12
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.67
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.57
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.78
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 37.25
211 TestKicCustomNetwork/use_default_bridge_network 26.52
212 TestKicExistingNetwork 25.27
213 TestKicCustomSubnet 29.37
214 TestKicStaticIP 24.47
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 55.85
219 TestMountStart/serial/StartWithMountFirst 8.4
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 5.59
222 TestMountStart/serial/VerifyMountSecond 0.24
223 TestMountStart/serial/DeleteFirst 1.59
224 TestMountStart/serial/VerifyMountPostDelete 0.24
225 TestMountStart/serial/Stop 1.18
226 TestMountStart/serial/RestartStopped 8.21
227 TestMountStart/serial/VerifyMountPostStop 0.24
230 TestMultiNode/serial/FreshStart2Nodes 128.1
231 TestMultiNode/serial/DeployApp2Nodes 5.57
232 TestMultiNode/serial/PingHostFrom2Pods 0.72
233 TestMultiNode/serial/AddNode 52.76
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.6
236 TestMultiNode/serial/CopyFile 8.86
237 TestMultiNode/serial/StopNode 2.08
238 TestMultiNode/serial/StartAfterStop 7.01
239 TestMultiNode/serial/RestartKeepsNodes 71.74
240 TestMultiNode/serial/DeleteNode 5.18
241 TestMultiNode/serial/StopMultiNode 23.71
242 TestMultiNode/serial/RestartMultiNode 48.67
243 TestMultiNode/serial/ValidateNameConflict 26.36
248 TestPreload 118.89
250 TestScheduledStopUnix 100.47
253 TestInsufficientStorage 12.28
254 TestRunningBinaryUpgrade 41.35
256 TestKubernetesUpgrade 351.24
257 TestMissingContainerUpgrade 94.73
258 TestStoppedBinaryUpgrade/Setup 2.64
259 TestStoppedBinaryUpgrade/Upgrade 68.7
260 TestStoppedBinaryUpgrade/MinikubeLogs 1
269 TestPause/serial/Start 73.48
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
272 TestNoKubernetes/serial/StartWithK8s 27.01
280 TestNetworkPlugins/group/false 3.53
281 TestNoKubernetes/serial/StartWithStopK8s 6.29
285 TestNoKubernetes/serial/Start 7.95
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
287 TestNoKubernetes/serial/ProfileList 16.45
288 TestNoKubernetes/serial/Stop 1.19
289 TestNoKubernetes/serial/StartNoArgs 7.25
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
291 TestPause/serial/SecondStartNoReconfiguration 18.18
292 TestPause/serial/Pause 0.84
293 TestPause/serial/VerifyStatus 0.33
294 TestPause/serial/Unpause 0.68
295 TestPause/serial/PauseAgain 0.81
296 TestPause/serial/DeletePaused 2.79
297 TestPause/serial/VerifyDeletedResources 15.13
299 TestStartStop/group/old-k8s-version/serial/FirstStart 54.08
301 TestStartStop/group/no-preload/serial/FirstStart 56.97
302 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
304 TestStartStop/group/old-k8s-version/serial/Stop 12.02
305 TestStartStop/group/no-preload/serial/DeployApp 10.25
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
307 TestStartStop/group/old-k8s-version/serial/SecondStart 46.22
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
309 TestStartStop/group/no-preload/serial/Stop 11.92
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
311 TestStartStop/group/no-preload/serial/SecondStart 48.45
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
315 TestStartStop/group/old-k8s-version/serial/Pause 2.8
317 TestStartStop/group/embed-certs/serial/FirstStart 77.14
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.9
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
323 TestStartStop/group/no-preload/serial/Pause 2.59
325 TestStartStop/group/newest-cni/serial/FirstStart 31.71
326 TestNetworkPlugins/group/auto/Start 72.1
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
329 TestStartStop/group/newest-cni/serial/Stop 1.2
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/newest-cni/serial/SecondStart 15
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
335 TestStartStop/group/newest-cni/serial/Pause 2.97
336 TestNetworkPlugins/group/kindnet/Start 74.93
337 TestStartStop/group/embed-certs/serial/DeployApp 9.31
338 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
339 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
340 TestStartStop/group/embed-certs/serial/Stop 11.91
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
344 TestStartStop/group/embed-certs/serial/SecondStart 49.3
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.33
347 TestNetworkPlugins/group/auto/KubeletFlags 0.29
348 TestNetworkPlugins/group/auto/NetCatPod 9.25
349 TestNetworkPlugins/group/auto/DNS 0.18
350 TestNetworkPlugins/group/auto/Localhost 0.14
351 TestNetworkPlugins/group/auto/HairPin 0.19
352 TestNetworkPlugins/group/calico/Start 61.33
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
356 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
357 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
359 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
360 TestStartStop/group/embed-certs/serial/Pause 2.68
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
362 TestNetworkPlugins/group/kindnet/DNS 0.15
363 TestNetworkPlugins/group/kindnet/Localhost 0.14
364 TestNetworkPlugins/group/kindnet/HairPin 0.15
365 TestNetworkPlugins/group/custom-flannel/Start 62.99
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.82
368 TestNetworkPlugins/group/enable-default-cni/Start 74.68
369 TestNetworkPlugins/group/flannel/Start 60.22
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/calico/KubeletFlags 0.3
372 TestNetworkPlugins/group/calico/NetCatPod 11.23
373 TestNetworkPlugins/group/calico/DNS 0.13
374 TestNetworkPlugins/group/calico/Localhost 0.11
375 TestNetworkPlugins/group/calico/HairPin 0.11
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
378 TestNetworkPlugins/group/custom-flannel/DNS 0.14
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
381 TestNetworkPlugins/group/bridge/Start 69.94
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
386 TestNetworkPlugins/group/flannel/NetCatPod 10.31
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
390 TestNetworkPlugins/group/flannel/DNS 0.16
391 TestNetworkPlugins/group/flannel/Localhost 0.12
392 TestNetworkPlugins/group/flannel/HairPin 0.1
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
394 TestNetworkPlugins/group/bridge/NetCatPod 9.18
395 TestNetworkPlugins/group/bridge/DNS 0.12
396 TestNetworkPlugins/group/bridge/Localhost 0.1
397 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (13.09s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-956137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-956137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.086765s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 16:36:45.199368   11141 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 16:36:45.199500   11141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
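preload-exists is a pure filesystem assertion: it passes as long as the tarball fetched by the json-events subtest is still in the local cache. Roughly equivalent to the following, with the CI host's jenkins-specific MINIKUBE_HOME replaced by an assumed default layout:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// The check is a plain stat of the cached preload tarball.
	func main() {
		p := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("preload found:", p)
	}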

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-956137
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-956137: exit status 85 (60.475115ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-956137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-956137 │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:36:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:36:32.153570   11153 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:36:32.153800   11153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:36:32.153812   11153 out.go:374] Setting ErrFile to fd 2...
	I0908 16:36:32.153817   11153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:36:32.153991   11153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	W0908 16:36:32.154115   11153 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21504-7450/.minikube/config/config.json: open /home/jenkins/minikube-integration/21504-7450/.minikube/config/config.json: no such file or directory
	I0908 16:36:32.154671   11153 out.go:368] Setting JSON to true
	I0908 16:36:32.155546   11153 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1136,"bootTime":1757348256,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:36:32.155633   11153 start.go:140] virtualization: kvm guest
	I0908 16:36:32.158035   11153 out.go:99] [download-only-956137] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 16:36:32.158190   11153 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 16:36:32.158218   11153 notify.go:220] Checking for updates...
	I0908 16:36:32.159591   11153 out.go:171] MINIKUBE_LOCATION=21504
	I0908 16:36:32.161132   11153 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:36:32.162480   11153 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:36:32.163815   11153 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 16:36:32.165345   11153 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 16:36:32.167660   11153 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 16:36:32.167928   11153 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:36:32.194200   11153 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 16:36:32.194262   11153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:36:32.568093   11153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 16:36:32.558321958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:36:32.568193   11153 docker.go:318] overlay module found
	I0908 16:36:32.569750   11153 out.go:99] Using the docker driver based on user configuration
	I0908 16:36:32.569774   11153 start.go:304] selected driver: docker
	I0908 16:36:32.569779   11153 start.go:918] validating driver "docker" against <nil>
	I0908 16:36:32.569849   11153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:36:32.623799   11153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 16:36:32.613914306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:36:32.623981   11153 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 16:36:32.624524   11153 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 16:36:32.624709   11153 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 16:36:32.626434   11153 out.go:171] Using Docker driver with root privileges
	I0908 16:36:32.627781   11153 cni.go:84] Creating CNI manager for ""
	I0908 16:36:32.627837   11153 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 16:36:32.627847   11153 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 16:36:32.627915   11153 start.go:348] cluster config:
	{Name:download-only-956137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-956137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:36:32.629033   11153 out.go:99] Starting "download-only-956137" primary control-plane node in "download-only-956137" cluster
	I0908 16:36:32.629052   11153 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 16:36:32.630192   11153 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 16:36:32.630217   11153 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 16:36:32.630258   11153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 16:36:32.646441   11153 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 16:36:32.646655   11153 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 16:36:32.646772   11153 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 16:36:32.725279   11153 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:36:32.725334   11153 cache.go:58] Caching tarball of preloaded images
	I0908 16:36:32.725478   11153 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 16:36:32.727603   11153 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 16:36:32.727641   11153 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 16:36:32.829800   11153 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:36:39.723740   11153 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	
	
	* The control-plane node download-only-956137 host does not exist
	  To start a cluster, run: "minikube start -p download-only-956137"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
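The Last Start log above shows the preload being fetched with a checksum=md5:... fragment; the downloader verifies that digest before accepting the tarball. Conceptually the verification step amounts to the following sketch (path and expected digest copied from the log, otherwise illustrative and not minikube's actual downloader):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// Compare a downloaded file's md5 digest with the expected value from
	// the ?checksum=md5:... query fragment.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4",
			"72bc7f8573f574c02d8c9a9b3496176b"))
	}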

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-956137
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (12.52s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-672420 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-672420 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.52197472s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.52s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 16:36:58.122117   11141 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 16:36:58.122156   11141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-672420
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-672420: exit status 85 (61.481658ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-956137 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-956137 │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │ 08 Sep 25 16:36 UTC │
	│ delete  │ -p download-only-956137                                                                                                                                                   │ download-only-956137 │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │ 08 Sep 25 16:36 UTC │
	│ start   │ -o=json --download-only -p download-only-672420 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-672420 │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:36:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:36:45.641630   11512 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:36:45.641765   11512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:36:45.641770   11512 out.go:374] Setting ErrFile to fd 2...
	I0908 16:36:45.641774   11512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:36:45.641948   11512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 16:36:45.642492   11512 out.go:368] Setting JSON to true
	I0908 16:36:45.643273   11512 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1150,"bootTime":1757348256,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:36:45.643360   11512 start.go:140] virtualization: kvm guest
	I0908 16:36:45.645346   11512 out.go:99] [download-only-672420] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:36:45.645524   11512 notify.go:220] Checking for updates...
	I0908 16:36:45.646932   11512 out.go:171] MINIKUBE_LOCATION=21504
	I0908 16:36:45.648476   11512 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:36:45.649960   11512 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:36:45.651306   11512 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 16:36:45.652561   11512 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 16:36:45.654832   11512 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 16:36:45.655051   11512 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:36:45.677441   11512 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 16:36:45.677503   11512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:36:45.727643   11512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 16:36:45.718688501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:36:45.727745   11512 docker.go:318] overlay module found
	I0908 16:36:45.729434   11512 out.go:99] Using the docker driver based on user configuration
	I0908 16:36:45.729463   11512 start.go:304] selected driver: docker
	I0908 16:36:45.729468   11512 start.go:918] validating driver "docker" against <nil>
	I0908 16:36:45.729540   11512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:36:45.779373   11512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 16:36:45.770519189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:36:45.779522   11512 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 16:36:45.780049   11512 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 16:36:45.780219   11512 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 16:36:45.782046   11512 out.go:171] Using Docker driver with root privileges
	I0908 16:36:45.783683   11512 cni.go:84] Creating CNI manager for ""
	I0908 16:36:45.783778   11512 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 16:36:45.783790   11512 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 16:36:45.783881   11512 start.go:348] cluster config:
	{Name:download-only-672420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-672420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:36:45.785466   11512 out.go:99] Starting "download-only-672420" primary control-plane node in "download-only-672420" cluster
	I0908 16:36:45.785490   11512 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 16:36:45.786864   11512 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 16:36:45.786898   11512 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:36:45.787031   11512 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 16:36:45.803258   11512 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 16:36:45.803381   11512 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 16:36:45.803400   11512 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 16:36:45.803404   11512 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 16:36:45.803411   11512 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 16:36:45.890129   11512 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:36:45.890157   11512 cache.go:58] Caching tarball of preloaded images
	I0908 16:36:45.890325   11512 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:36:45.892279   11512 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 16:36:45.892298   11512 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 16:36:45.990471   11512 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21504-7450/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-672420 host does not exist
	  To start a cluster, run: "minikube start -p download-only-672420"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-672420
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.12s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-192642 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-192642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-192642
--- PASS: TestDownloadOnlyKic (1.12s)
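TestDownloadOnlyKic exercises the kic base-image path seen earlier in the json-events logs: minikube first asks the local docker daemon for the pinned kicbase image and only downloads on a miss. A rough equivalent of that presence check (tag taken from the logs above, digest omitted for readability):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// docker image inspect exits non-zero when the image is absent locally.
	func main() {
		img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488"
		err := exec.Command("docker", "image", "inspect", img).Run()
		fmt.Println("cached locally:", err == nil)
	}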

                                                
                                    
TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I0908 16:36:59.913207   11141 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-616006 --alsologtostderr --binary-mirror http://127.0.0.1:43461 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-616006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-616006
--- PASS: TestBinaryMirror (0.80s)
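TestBinaryMirror starts a throwaway HTTP server on 127.0.0.1:43461 and points --binary-mirror at it, so kubectl is fetched from the mirror instead of dl.k8s.io. Any static file server exposing the release tree (e.g. v1.34.0/bin/linux/amd64/kubectl) under its root works; a minimal sketch, with the directory name purely illustrative:

	package main

	import (
		"log"
		"net/http"
	)

	// Serve ./mirror as a flat binary mirror; the test harness runs its own
	// ephemeral equivalent of this.
	func main() {
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:43461", nil))
	}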

                                                
                                    
TestOffline (91.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-068817 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-068817 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m29.668312994s)
helpers_test.go:175: Cleaning up "offline-crio-068817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-068817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-068817: (2.313119043s)
--- PASS: TestOffline (91.98s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-739733
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-739733: exit status 85 (52.034669ms)

                                                
                                                
-- stdout --
	* Profile "addons-739733" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-739733"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-739733
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-739733: exit status 85 (51.856353ms)

                                                
                                                
-- stdout --
	* Profile "addons-739733" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-739733"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (180.53s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-739733 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-739733 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m0.53367883s)
--- PASS: TestAddons/Setup (180.53s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-739733 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-739733 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (17.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.029947ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-v7wsv" [2000242d-23c3-4a44-8db8-efd30c1097d4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002714726s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wstmd" [e7772012-dac5-420a-94cf-1bf5e180f021] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003801823s
addons_test.go:392: (dbg) Run:  kubectl --context addons-739733 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-739733 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-739733 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.250702442s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 ip
2025/09/08 16:40:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.02s)
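
Note: the wget --spider probe above exercises in-cluster DNS resolution of the registry Service. Below is a minimal Go sketch of the same reachability check, assuming it runs inside a cluster pod where registry.kube-system.svc.cluster.local resolves; the client and timeout are illustrative, not part of this test suite:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Same endpoint the test hits with `wget --spider -S`.
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatalf("registry not reachable: %v", err) // assumes in-cluster DNS
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}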

TestAddons/parallel/RegistryCreds (0.65s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.920906ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-739733
addons_test.go:332: (dbg) Run:  kubectl --context addons-739733 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

TestAddons/parallel/InspektorGadget (6.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-q4xzf" [f3cc0e43-0707-4c3b-97f9-e7a8d044b84f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004098894s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.25s)

TestAddons/parallel/MetricsServer (6.12s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.163345ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-cgkpr" [972f14e5-8beb-4f18-9642-a40844fe5820] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003485733s
addons_test.go:463: (dbg) Run:  kubectl --context addons-739733 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 addons disable metrics-server --alsologtostderr -v=1: (1.023460621s)
--- PASS: TestAddons/parallel/MetricsServer (6.12s)

TestAddons/parallel/CSI (48.21s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0908 16:40:35.974582   11141 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 16:40:35.977242   11141 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 16:40:35.977272   11141 kapi.go:107] duration metric: took 2.711804ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.726524ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-739733 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-739733 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [76cd6212-0ba1-43c3-9f14-7d838a4bfdf8] Pending
helpers_test.go:352: "task-pv-pod" [76cd6212-0ba1-43c3-9f14-7d838a4bfdf8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [76cd6212-0ba1-43c3-9f14-7d838a4bfdf8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.002531265s
addons_test.go:572: (dbg) Run:  kubectl --context addons-739733 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-739733 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-739733 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-739733 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-739733 delete pod task-pv-pod: (1.118087828s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-739733 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-739733 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-739733 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d0a53fe0-535a-465f-a430-32d6bd23b833] Pending
helpers_test.go:352: "task-pv-pod-restore" [d0a53fe0-535a-465f-a430-32d6bd23b833] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d0a53fe0-535a-465f-a430-32d6bd23b833] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004052873s
addons_test.go:614: (dbg) Run:  kubectl --context addons-739733 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-739733 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-739733 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.562080294s)
--- PASS: TestAddons/parallel/CSI (48.21s)
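
Note: the helpers_test.go:402 lines above poll `kubectl get pvc -o jsonpath={.status.phase}` until the claim binds. A minimal client-go sketch of that wait loop follows, assuming an already-built clientset; waitForPVCBound, the package name, and the 2-second interval are hypothetical, not helpers from this suite:

	package waiters

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPVCBound polls the PVC phase, mirroring the kubectl loop above.
	func waitForPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && pvc.Status.Phase == corev1.ClaimBound {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}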

TestAddons/parallel/Headlamp (18.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-739733 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-739733 --alsologtostderr -v=1: (1.461852604s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-stjgs" [837bda38-8cab-42a3-b0bb-71546d1da1b4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-stjgs" [837bda38-8cab-42a3-b0bb-71546d1da1b4] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.017419669s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 addons disable headlamp --alsologtostderr -v=1: (5.68713964s)
--- PASS: TestAddons/parallel/Headlamp (18.17s)

TestAddons/parallel/CloudSpanner (5.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-czxln" [5b0e7a0b-bc5a-4fed-b9e6-9d84ffe32695] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004148801s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (58.73s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-739733 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-739733 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1d5b7bcd-28db-4091-a6dc-0bd68aebc610] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1d5b7bcd-28db-4091-a6dc-0bd68aebc610] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1d5b7bcd-28db-4091-a6dc-0bd68aebc610] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003475369s
addons_test.go:967: (dbg) Run:  kubectl --context addons-739733 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 ssh "cat /opt/local-path-provisioner/pvc-179b132b-c58d-4324-a2e4-e9d22ba4b122_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-739733 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-739733 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.791707892s)
--- PASS: TestAddons/parallel/LocalPath (58.73s)

TestAddons/parallel/NvidiaDevicePlugin (5.89s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7gcdp" [e50230cb-dc63-4d6e-86bc-25517ebbaced] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003562088s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.89s)

TestAddons/parallel/Yakd (10.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-dl4s8" [8ffa9007-f9cc-4939-b91e-bba9b01e60bf] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003885517s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-739733 addons disable yakd --alsologtostderr -v=1: (5.667662004s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

TestAddons/parallel/AmdGpuDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-4rtmc" [856bd7fb-7aa1-41c4-9327-3ec267b88a61] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.007046036s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.46s)

TestAddons/StoppedEnableDisable (12.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-739733
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-739733: (11.850158809s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-739733
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-739733
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-739733
--- PASS: TestAddons/StoppedEnableDisable (12.11s)

TestCertOptions (29.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-502818 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-502818 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.147150683s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-502818 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-502818 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-502818 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-502818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-502818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-502818: (1.892419954s)
--- PASS: TestCertOptions (29.64s)

TestCertExpiration (224.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-071084 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-071084 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.854719022s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-071084 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-071084 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.132514752s)
helpers_test.go:175: Cleaning up "cert-expiration-071084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-071084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-071084: (2.329550352s)
--- PASS: TestCertExpiration (224.32s)

TestForceSystemdFlag (32.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-463659 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-463659 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.427411024s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-463659 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-463659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-463659
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-463659: (2.504122834s)
--- PASS: TestForceSystemdFlag (32.24s)

TestForceSystemdEnv (25.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-964426 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-964426 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.7758291s)
helpers_test.go:175: Cleaning up "force-systemd-env-964426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-964426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-964426: (2.382364738s)
--- PASS: TestForceSystemdEnv (25.16s)

TestKVMDriverInstallOrUpdate (2.25s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0908 17:23:38.254680   11141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 17:23:38.254794   11141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 17:23:38.291251   11141 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 17:23:38.291431   11141 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 17:23:38.291485   11141 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate863979197/001/docker-machine-driver-kvm2
I0908 17:23:38.521451   11141 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate863979197/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000585640 gz:0xc000585648 tar:0xc0005855f0 tar.bz2:0xc000585600 tar.gz:0xc000585610 tar.xz:0xc000585620 tar.zst:0xc000585630 tbz2:0xc000585600 tgz:0xc000585610 txz:0xc000585620 tzst:0xc000585630 xz:0xc000585650 zip:0xc000585660 zst:0xc000585658] Getters:map[file:0xc001941b90 http:0xc00054d4a0 https:0xc00054d4f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 17:23:38.521504   11141 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate863979197/001/docker-machine-driver-kvm2
I0908 17:23:39.672399   11141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 17:23:39.672500   11141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 17:23:39.701553   11141 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 17:23:39.701589   11141 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 17:23:39.701687   11141 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 17:23:39.701720   11141 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate863979197/002/docker-machine-driver-kvm2
I0908 17:23:39.729610   11141 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate863979197/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000585640 gz:0xc000585648 tar:0xc0005855f0 tar.bz2:0xc000585600 tar.gz:0xc000585610 tar.xz:0xc000585620 tar.zst:0xc000585630 tbz2:0xc000585600 tgz:0xc000585610 txz:0xc000585620 tzst:0xc000585630 xz:0xc000585650 zip:0xc000585660 zst:0xc000585658] Getters:map[file:0xc0021f0130 http:0xc000849180 https:0xc0008491d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 17:23:39.729691   11141 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate863979197/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (2.25s)
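
Note: the two download attempts above show the installer's fallback: the GOARCH-suffixed asset's checksum file 404s, so it retries the common (un-suffixed) name. A sketch of that try-arch-then-common pattern with net/http, reusing the release URLs from the log; driverURL is a hypothetical helper, not minikube's implementation:

	package main

	import (
		"fmt"
		"net/http"
		"runtime"
	)

	// driverURL returns the first downloadable URL, preferring the
	// GOARCH-suffixed asset and falling back to the common name on a miss.
	func driverURL(version, name string) (string, error) {
		base := "https://github.com/kubernetes/minikube/releases/download/" + version + "/" + name
		for _, u := range []string{base + "-" + runtime.GOARCH, base} {
			resp, err := http.Head(u)
			if err != nil {
				return "", err
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return u, nil
			}
		}
		return "", fmt.Errorf("no downloadable asset for %s %s", name, version)
	}

	func main() {
		u, err := driverURL("v1.3.0", "docker-machine-driver-kvm2")
		if err != nil {
			fmt.Println("download would fail:", err)
			return
		}
		fmt.Println("would download:", u)
	}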

TestErrorSpam/setup (22.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-635641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-635641 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-635641 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-635641 --driver=docker  --container-runtime=crio: (22.015456776s)
--- PASS: TestErrorSpam/setup (22.02s)

TestErrorSpam/start (0.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 status
--- PASS: TestErrorSpam/status (0.87s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 stop: (1.180875423s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-635641 --log_dir /tmp/nospam-635641 stop
--- PASS: TestErrorSpam/stop (1.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21504-7450/.minikube/files/etc/test/nested/copy/11141/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-849003 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0908 16:45:01.992071   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.002121   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.013770   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.035079   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.076559   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.158027   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.319594   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:02.641340   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:03.283454   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:04.564918   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-849003 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.763763035s)
--- PASS: TestFunctional/serial/StartWithProxy (70.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.49s)

=== RUN   TestFunctional/serial/SoftStart
I0908 16:45:06.522609   11141 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-849003 --alsologtostderr -v=8
E0908 16:45:07.127268   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:12.249469   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:22.491679   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-849003 --alsologtostderr -v=8: (28.48574673s)
functional_test.go:678: soft start took 28.486410801s for "functional-849003" cluster.
I0908 16:45:35.008689   11141 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (28.49s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-849003 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 cache add registry.k8s.io/pause:3.3: (1.003282801s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-849003 /tmp/TestFunctionalserialCacheCmdcacheadd_local3023359207/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cache add minikube-local-cache-test:functional-849003
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 cache add minikube-local-cache-test:functional-849003: (1.662015111s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cache delete minikube-local-cache-test:functional-849003
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-849003
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.613788ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
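
Note: the sequence above removes the image on the node, confirms `crictl inspecti` now fails, runs `cache reload`, and confirms the image is back. A sketch of the same check driven from Go via os/exec; a `minikube` binary on PATH and the profile name are assumptions, while the subcommands are the ones logged above:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const profile = "functional-849003"
		img := "registry.k8s.io/pause:latest"
		run := func(args ...string) error { return exec.Command("minikube", args...).Run() }

		_ = run("-p", profile, "ssh", "sudo", "crictl", "rmi", img)
		if run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img) == nil {
			log.Fatal("image still present after rmi")
		}
		if err := run("-p", profile, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
			log.Fatalf("image missing after cache reload: %v", err)
		}
		log.Println("cache reload restored", img)
	}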

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 kubectl -- --context functional-849003 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-849003 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-849003 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 16:45:42.973086   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-849003 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.98405837s)
functional_test.go:776: restart took 37.98419003s for "functional-849003" cluster.
I0908 16:46:20.402718   11141 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (37.98s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-849003 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
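
Note: the phase/status lines above come from decoding `kubectl get po -l tier=control-plane -n kube-system -o=json`. A sketch of that decode step follows, assuming kubectl on PATH; the struct mirrors only the PodList fields used here and is not the suite's own type:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct {
				Phase      string
				Conditions []struct{ Type, Status string }
			}
		}
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "po",
			"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl podList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Items {
			fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
				}
			}
		}
	}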

TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 logs: (1.316437458s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 logs --file /tmp/TestFunctionalserialLogsFileCmd2000086447/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 logs --file /tmp/TestFunctionalserialLogsFileCmd2000086447/001/logs.txt: (1.363839785s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-849003 apply -f testdata/invalidsvc.yaml
E0908 16:46:23.934704   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-849003
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-849003: exit status 115 (314.784142ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31807 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-849003 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 config get cpus: exit status 14 (68.626939ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 config get cpus: exit status 14 (52.012685ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
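
Both non-zero exits above are expected: config get on an unset key fails with exit status 14, which is how the test distinguishes "unset" from "set". A minimal sketch of consuming that exit code from Go, assuming the binary path and profile from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-849003", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log above shows exit status 14 when the key is unset.
		fmt.Printf("config get failed with exit code %d: %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Printf("cpus = %s", out)
}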

TestFunctional/parallel/DashboardCmd (12.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-849003 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-849003 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 49115: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.87s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-849003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-849003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.53824ms)
-- stdout --
	* [functional-849003] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0908 16:46:29.831263   48631 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:46:29.832969   48631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:46:29.832992   48631 out.go:374] Setting ErrFile to fd 2...
	I0908 16:46:29.833000   48631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:46:29.833328   48631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 16:46:29.833899   48631 out.go:368] Setting JSON to false
	I0908 16:46:29.834799   48631 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1734,"bootTime":1757348256,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:46:29.834892   48631 start.go:140] virtualization: kvm guest
	I0908 16:46:29.836733   48631 out.go:179] * [functional-849003] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:46:29.838839   48631 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:46:29.838884   48631 notify.go:220] Checking for updates...
	I0908 16:46:29.841696   48631 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:46:29.843206   48631 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:46:29.844587   48631 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 16:46:29.846160   48631 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:46:29.847553   48631 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:46:29.849271   48631 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:46:29.849898   48631 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:46:29.875360   48631 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 16:46:29.875481   48631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:46:29.929706   48631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 16:46:29.919886656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:46:29.929848   48631 docker.go:318] overlay module found
	I0908 16:46:29.931926   48631 out.go:179] * Using the docker driver based on existing profile
	I0908 16:46:29.933208   48631 start.go:304] selected driver: docker
	I0908 16:46:29.933231   48631 start.go:918] validating driver "docker" against &{Name:functional-849003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-849003 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:46:29.933362   48631 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:46:29.936101   48631 out.go:203] 
	W0908 16:46:29.937516   48631 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 16:46:29.938840   48631 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-849003 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)
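
The dry run exits 23 because the requested 250MiB is below minikube's usable minimum of 1800MB, per the RSRC_INSUFFICIENT_REQ_MEMORY message above. A toy version of that validation:

package main

import "fmt"

// minUsableMB matches the minimum quoted in the error message above.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in the dry run above
	fmt.Println(validateMemory(4096)) // ok
}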

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-849003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-849003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (155.666542ms)
-- stdout --
	* [functional-849003] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0908 16:46:29.684372   48487 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:46:29.684477   48487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:46:29.684482   48487 out.go:374] Setting ErrFile to fd 2...
	I0908 16:46:29.684486   48487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:46:29.684803   48487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 16:46:29.685373   48487 out.go:368] Setting JSON to false
	I0908 16:46:29.686485   48487 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1734,"bootTime":1757348256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:46:29.686596   48487 start.go:140] virtualization: kvm guest
	I0908 16:46:29.688910   48487 out.go:179] * [functional-849003] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 16:46:29.690279   48487 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:46:29.690319   48487 notify.go:220] Checking for updates...
	I0908 16:46:29.692720   48487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:46:29.693907   48487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 16:46:29.695035   48487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 16:46:29.696099   48487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:46:29.697318   48487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:46:29.698865   48487 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:46:29.699340   48487 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:46:29.725836   48487 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 16:46:29.725975   48487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 16:46:29.775557   48487 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 16:46:29.766545512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 16:46:29.775705   48487 docker.go:318] overlay module found
	I0908 16:46:29.777751   48487 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 16:46:29.779267   48487 start.go:304] selected driver: docker
	I0908 16:46:29.779291   48487 start.go:918] validating driver "docker" against &{Name:functional-849003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-849003 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:46:29.779410   48487 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:46:29.781524   48487 out.go:203] 
	W0908 16:46:29.782938   48487 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 16:46:29.784319   48487 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
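
The -f argument above is a Go text/template rendered against minikube's status object. A standalone sketch that applies the logged format string (copied verbatim, including its "kublet" spelling) to a stand-in struct, since the real status type lives inside minikube:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in with the fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}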

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (48.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b37b30b9-e832-442f-94bb-1064e034d287] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004212453s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-849003 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-849003 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-849003 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-849003 apply -f testdata/storage-provisioner/pod.yaml
I0908 16:46:33.526345   11141 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8f9c2341-c580-4f20-9591-aac3afd97da3] Pending
helpers_test.go:352: "sp-pod" [8f9c2341-c580-4f20-9591-aac3afd97da3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8f9c2341-c580-4f20-9591-aac3afd97da3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003820824s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-849003 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-849003 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-849003 apply -f testdata/storage-provisioner/pod.yaml
I0908 16:46:57.435712   11141 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [907393dd-8fa3-4951-93ad-6179477cded2] Pending
helpers_test.go:352: "sp-pod" [907393dd-8fa3-4951-93ad-6179477cded2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [907393dd-8fa3-4951-93ad-6179477cded2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003347656s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-849003 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.39s)
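
The sequence above is the persistence check itself: write a file through the first sp-pod, delete and recreate the pod, then confirm the file is still on the PVC-backed volume. A condensed sketch of the same round trip (the real test also waits for the new pod to be Running between the apply and the final exec):

package main

import (
	"fmt"
	"os/exec"
)

// run wraps kubectl with the context name taken from the log.
func run(args ...string) error {
	full := append([]string{"--context", "functional-849003"}, args...)
	return exec.Command("kubectl", full...).Run()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through pod 1
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate it
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // file should still be there
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			fmt.Println("step failed:", step, err)
			return
		}
	}
	fmt.Println("file survived pod recreation")
}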

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.59s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh -n functional-849003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cp functional-849003:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd927866422/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh -n functional-849003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh -n functional-849003 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.59s)

TestFunctional/parallel/MySQL (20.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-849003 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-mmmpf" [9353f2ef-2fd5-46c3-924a-5179fe9d026d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-mmmpf" [9353f2ef-2fd5-46c3-924a-5179fe9d026d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003339083s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-849003 exec mysql-5bb876957f-mmmpf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-849003 exec mysql-5bb876957f-mmmpf -- mysql -ppassword -e "show databases;": exit status 1 (100.624458ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0908 16:47:15.338777   11141 retry.go:31] will retry after 1.416633328s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-849003 exec mysql-5bb876957f-mmmpf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.79s)
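
The ERROR 2002 above is the usual race between the pod reporting Running and mysqld finishing startup, so the harness retries ("will retry after 1.416633328s"). A sketch of that retry-with-backoff pattern, with illustrative delays rather than minikube's actual retry.go values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe copied from the test command above; the pod name is from this run.
	probe := func() error {
		return exec.Command("kubectl", "--context", "functional-849003",
			"exec", "mysql-5bb876957f-mmmpf", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	}
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		if err := probe(); err == nil {
			fmt.Println("mysql is answering")
			return
		}
		fmt.Printf("attempt %d failed; will retry after %s\n", attempt, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	fmt.Println("mysql never became reachable")
}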

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11141/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /etc/test/nested/copy/11141/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11141.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /etc/ssl/certs/11141.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11141.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /usr/share/ca-certificates/11141.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/111412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /etc/ssl/certs/111412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/111412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /usr/share/ca-certificates/111412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)
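
The hash-named entries (51391683.0, 3ec20f2e.0) are OpenSSL-style hash links that must resolve to the synced PEM certificates. A small sketch that checks such a file actually parses as an X.509 certificate; the path is taken from the log and assumed readable on the machine running the check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/ssl/certs/51391683.0") // path from the log
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("not PEM data")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("not a certificate:", err)
		return
	}
	fmt.Println("subject:", cert.Subject)
}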

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-849003 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
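
The --template above ranges over the node's label map and prints each key. The same template applied to a plain map shows what it produces (the label names here are just plausible examples):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-849003",
		"minikube.k8s.io/name":   "functional-849003",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}\n"))
	_ = tmpl.Execute(os.Stdout, labels) // prints the keys, space-separated
}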

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh "sudo systemctl is-active docker": exit status 1 (237.461608ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh "sudo systemctl is-active containerd": exit status 1 (240.772133ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
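
Both non-zero exits are the point of the test: systemctl is-active exits 0 only when the unit is active, and the ssh exit status 3 above is systemd's code for an inactive unit. A local sketch of the same check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is active, judging purely by the
// exit code of `systemctl is-active --quiet <unit>`.
func isActive(unit string) bool {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false // non-zero: inactive, failed, or unknown
	}
	return err == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, isActive(unit))
	}
}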

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "319.789756ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.026313ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "330.138927ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "49.814241ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
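
Since profile list -o json is machine-readable, it can be consumed directly. A sketch that decodes the output generically, deliberately not asserting minikube's exact field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var profiles map[string]any
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for key := range profiles {
		fmt.Println("top-level key:", key)
	}
}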

TestFunctional/parallel/MountCmd/any-port (7.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdany-port547068648/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757349988644333571" to /tmp/TestFunctionalparallelMountCmdany-port547068648/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757349988644333571" to /tmp/TestFunctionalparallelMountCmdany-port547068648/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757349988644333571" to /tmp/TestFunctionalparallelMountCmdany-port547068648/001/test-1757349988644333571
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.212449ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 16:46:28.919790   11141 retry.go:31] will retry after 396.26599ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 16:46 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 16:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 16:46 test-1757349988644333571
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh cat /mount-9p/test-1757349988644333571
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-849003 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [39591aec-f4e5-4d37-ba97-de48932321f8] Pending
helpers_test.go:352: "busybox-mount" [39591aec-f4e5-4d37-ba97-de48932321f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [39591aec-f4e5-4d37-ba97-de48932321f8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [39591aec-f4e5-4d37-ba97-de48932321f8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003636055s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-849003 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdany-port547068648/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.50s)
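
The initial findmnt failure followed by a ~400ms retry is the normal pattern while the 9p server comes up. A sketch of that probe loop, with the mount point and retry spacing taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const mountPoint = "/mount-9p" // guest path from the log
	for i := 0; i < 10; i++ {
		out, err := exec.Command("findmnt", "-T", mountPoint).CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(400 * time.Millisecond) // log shows ~400ms retry spacing
	}
	fmt.Println("mount never appeared")
}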

TestFunctional/parallel/MountCmd/specific-port (1.61s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdspecific-port2777218275/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.010292ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0908 16:46:36.400411   11141 retry.go:31] will retry after 371.232017ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdspecific-port2777218275/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh "sudo umount -f /mount-9p": exit status 1 (254.827668ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-849003 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdspecific-port2777218275/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2167996617/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2167996617/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2167996617/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-849003 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2167996617/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2167996617/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-849003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2167996617/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.98s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-849003 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-849003
localhost/kicbase/echo-server:functional-849003
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-849003 image ls --format short --alsologtostderr:
I0908 16:47:17.409707   54387 out.go:360] Setting OutFile to fd 1 ...
I0908 16:47:17.409959   54387 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:17.409968   54387 out.go:374] Setting ErrFile to fd 2...
I0908 16:47:17.409972   54387 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:17.410161   54387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
I0908 16:47:17.410693   54387 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:17.410776   54387 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:17.411108   54387 cli_runner.go:164] Run: docker container inspect functional-849003 --format={{.State.Status}}
I0908 16:47:17.428568   54387 ssh_runner.go:195] Run: systemctl --version
I0908 16:47:17.428611   54387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-849003
I0908 16:47:17.445753   54387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/functional-849003/id_rsa Username:docker}
I0908 16:47:17.530259   54387 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-849003 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-849003  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-849003  │ a8e3afd05bf0b │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-849003 image ls --format table --alsologtostderr:
I0908 16:47:17.821190   54487 out.go:360] Setting OutFile to fd 1 ...
I0908 16:47:17.821317   54487 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:17.821326   54487 out.go:374] Setting ErrFile to fd 2...
I0908 16:47:17.821330   54487 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:17.821512   54487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
I0908 16:47:17.822127   54487 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:17.822219   54487 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:17.822573   54487 cli_runner.go:164] Run: docker container inspect functional-849003 --format={{.State.Status}}
I0908 16:47:17.841137   54487 ssh_runner.go:195] Run: systemctl --version
I0908 16:47:17.841204   54487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-849003
I0908 16:47:17.860678   54487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/functional-849003/id_rsa Username:docker}
I0908 16:47:17.946237   54487 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-849003 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba0805
58","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-849003"],"size":"4943877"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apise
rver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"46169d968e9203e8b10debaf
898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a8e3afd05bf0be091cc64fc8b28f2c3f7a5a67bca268df0733e8653cbed39ea0","repoDigests":["localhost/minikube-local-cache-test@sha256:c9a779e739fd5ad22fda3f4ae06b344e8c278e71016e131954d4c532ed47c2c4"],"repoTags":["localhost/minikube-local-cache-test:functional-849003"],"size":"3328"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870
f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a
9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a571357196
28cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-849003 image ls --format json --alsologtostderr:
I0908 16:47:17.614921   54436 out.go:360] Setting OutFile to fd 1 ...
I0908 16:47:17.615028   54436 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:17.615033   54436 out.go:374] Setting ErrFile to fd 2...
I0908 16:47:17.615037   54436 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:17.615227   54436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
I0908 16:47:17.615754   54436 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:17.615843   54436 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:17.616206   54436 cli_runner.go:164] Run: docker container inspect functional-849003 --format={{.State.Status}}
I0908 16:47:17.633522   54436 ssh_runner.go:195] Run: systemctl --version
I0908 16:47:17.633571   54436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-849003
I0908 16:47:17.650205   54436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/functional-849003/id_rsa Username:docker}
I0908 16:47:17.737984   54436 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
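Each element of the JSON above is one image record with four fields: id, repoDigests, repoTags, and size (bytes, encoded as a string). A minimal Go sketch for decoding that payload; the struct and its tags are read off the output above rather than taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors one entry of `minikube image ls --format json`;
// field names are inferred from the output shown above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. the captured stdout above
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-13.13s %10s  %v\n", img.ID, img.Size, img.RepoTags)
	}
}

Untagged entries (the dashboard and metrics-scraper records) carry "repoTags":[], which is presumably why they never appear in the --format short listing earlier.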

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-849003 image ls --format yaml --alsologtostderr:
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a8e3afd05bf0be091cc64fc8b28f2c3f7a5a67bca268df0733e8653cbed39ea0
repoDigests:
- localhost/minikube-local-cache-test@sha256:c9a779e739fd5ad22fda3f4ae06b344e8c278e71016e131954d4c532ed47c2c4
repoTags:
- localhost/minikube-local-cache-test:functional-849003
size: "3328"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-849003
size: "4943877"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-849003 image ls --format yaml --alsologtostderr:
I0908 16:47:18.030443   54536 out.go:360] Setting OutFile to fd 1 ...
I0908 16:47:18.030798   54536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:18.030832   54536 out.go:374] Setting ErrFile to fd 2...
I0908 16:47:18.030839   54536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:18.031325   54536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
I0908 16:47:18.032553   54536 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:18.033008   54536 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:18.033506   54536 cli_runner.go:164] Run: docker container inspect functional-849003 --format={{.State.Status}}
I0908 16:47:18.050895   54536 ssh_runner.go:195] Run: systemctl --version
I0908 16:47:18.050942   54536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-849003
I0908 16:47:18.068702   54536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/functional-849003/id_rsa Username:docker}
I0908 16:47:18.149959   54536 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-849003 ssh pgrep buildkitd: exit status 1 (241.152028ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image build -t localhost/my-image:functional-849003 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 image build -t localhost/my-image:functional-849003 testdata/build --alsologtostderr: (3.291591925s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-849003 image build -t localhost/my-image:functional-849003 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f217e7a5f47
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-849003
--> 0e76c07ff4a
Successfully tagged localhost/my-image:functional-849003
0e76c07ff4ac870e430bcdea5048458e220c4d95308c35d214cd522611c99c69
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-849003 image build -t localhost/my-image:functional-849003 testdata/build --alsologtostderr:
I0908 16:47:18.473492   54679 out.go:360] Setting OutFile to fd 1 ...
I0908 16:47:18.473639   54679 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:18.473649   54679 out.go:374] Setting ErrFile to fd 2...
I0908 16:47:18.473675   54679 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:47:18.473886   54679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
I0908 16:47:18.474479   54679 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:18.475137   54679 config.go:182] Loaded profile config "functional-849003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:47:18.475530   54679 cli_runner.go:164] Run: docker container inspect functional-849003 --format={{.State.Status}}
I0908 16:47:18.493241   54679 ssh_runner.go:195] Run: systemctl --version
I0908 16:47:18.493300   54679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-849003
I0908 16:47:18.511580   54679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/functional-849003/id_rsa Username:docker}
I0908 16:47:18.594043   54679 build_images.go:161] Building image from path: /tmp/build.4247317007.tar
I0908 16:47:18.594112   54679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 16:47:18.603077   54679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4247317007.tar
I0908 16:47:18.606266   54679 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4247317007.tar: stat -c "%s %y" /var/lib/minikube/build/build.4247317007.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4247317007.tar': No such file or directory
I0908 16:47:18.606294   54679 ssh_runner.go:362] scp /tmp/build.4247317007.tar --> /var/lib/minikube/build/build.4247317007.tar (3072 bytes)
I0908 16:47:18.628188   54679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4247317007
I0908 16:47:18.636542   54679 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4247317007 -xf /var/lib/minikube/build/build.4247317007.tar
I0908 16:47:18.644830   54679 crio.go:315] Building image: /var/lib/minikube/build/build.4247317007
I0908 16:47:18.644890   54679 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-849003 /var/lib/minikube/build/build.4247317007 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 16:47:21.697977   54679 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-849003 /var/lib/minikube/build/build.4247317007 --cgroup-manager=cgroupfs: (3.053066364s)
I0908 16:47:21.698035   54679 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4247317007
I0908 16:47:21.707169   54679 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4247317007.tar
I0908 16:47:21.715332   54679 build_images.go:217] Built localhost/my-image:functional-849003 from /tmp/build.4247317007.tar
I0908 16:47:21.715358   54679 build_images.go:133] succeeded building to: functional-849003
I0908 16:47:21.715362   54679 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)
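The stderr above traces the CRI-O build path end to end: the testdata/build context is archived to /tmp/build.4247317007.tar, copied to /var/lib/minikube/build on the node, unpacked, and built with sudo podman build. A rough Go sketch of just the archiving step, using archive/tar; tarContext and the output path are illustrative stand-ins, not minikube's actual build_images.go code (symlinks are ignored for brevity):

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarContext streams dir into a tar archive, the same shape of
// build-context tarball the log shows being shipped to the node.
func tarContext(dir string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil || rel == "." {
			return err // skip the root directory entry itself
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if info.IsDir() {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("/tmp/build.ctx.tar") // stand-in for the /tmp/build.*.tar name in the log
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := tarContext("testdata/build", out); err != nil {
		panic(err)
	}
}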

TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.690716258s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-849003
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image load --daemon kicbase/echo-server:functional-849003 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 image load --daemon kicbase/echo-server:functional-849003 --alsologtostderr: (1.689782312s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 image ls: (2.328461036s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.02s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image load --daemon kicbase/echo-server:functional-849003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-849003
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image load --daemon kicbase/echo-server:functional-849003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image save kicbase/echo-server:functional-849003 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image rm kicbase/echo-server:functional-849003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-849003
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 image save --daemon kicbase/echo-server:functional-849003 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-849003
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-849003 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-849003 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-849003 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-849003 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 53993: os: process already finished
helpers_test.go:519: unable to terminate pid 53802: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-849003 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-849003 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f724f1d7-ad34-4e5d-a1d7-1bdf41729b65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f724f1d7-ad34-4e5d-a1d7-1bdf41729b65] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003768463s
I0908 16:47:27.598235   11141 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-849003 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
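The jsonpath query above is how the test reads the tunnel-assigned load-balancer IP off the Service status. A small Go sketch that polls the same query through kubectl until the tunnel populates it; waitForIngressIP, the polling interval, and the timeout are illustrative choices, not test code:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"time"
)

// waitForIngressIP repeats the jsonpath query the test runs until
// `minikube tunnel` has filled in .status.loadBalancer.ingress[0].ip.
func waitForIngressIP(kubectlContext, svc string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "svc", svc, "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := string(bytes.TrimSpace(out)); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(2 * time.Second) // arbitrary polling interval
	}
	return "", fmt.Errorf("no ingress IP for %s within %s", svc, timeout)
}

func main() {
	ip, err := waitForIngressIP("functional-849003", "nginx-svc", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("tunnel IP:", ip)
}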

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.140.117 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-849003 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0908 16:47:45.857025   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:50:01.992684   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:50:29.698932   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:55:01.991997   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 service list: (1.677919994s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-849003 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-849003 service list -o json: (1.672371515s)
functional_test.go:1504: Took "1.672461084s" to run "out/minikube-linux-amd64 -p functional-849003 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-849003
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-849003
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-849003
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (150.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m30.02345901s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (150.70s)

TestMultiControlPlane/serial/DeployApp (7.76s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 kubectl -- rollout status deployment/busybox: (5.717868829s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-68f55 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-dksjv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-zvztb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-68f55 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-dksjv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-zvztb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-68f55 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-dksjv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-zvztb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.76s)

TestMultiControlPlane/serial/PingHostFromPods (1.05s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-68f55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-68f55 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-dksjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-dksjv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-zvztb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 kubectl -- exec busybox-7b57f96db7-zvztb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.05s)
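Each probe above resolves host.minikube.internal inside a pod, extracts the third single-space-separated field of line 5 of the nslookup output (awk 'NR==5' | cut -d' ' -f3), and pings the result (192.168.49.1 in this run). A literal Go translation of that extraction; the transcript in main is a hypothetical busybox-style nslookup layout, included only to make the fixed line/field positions concrete:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output and return its 3rd space-separated field. Like cut,
// strings.Split treats each consecutive space as a field boundary.
func hostIP(nslookup string) (string, error) {
	lines := strings.Split(nslookup, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("want at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("line 5 has only %d fields", len(fields))
	}
	return fields[2], nil
}

func main() {
	// Hypothetical busybox nslookup transcript, for illustration only.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	ip, err := hostIP(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.1
}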

TestMultiControlPlane/serial/AddWorkerNode (57.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node add --alsologtostderr -v 5
E0908 17:00:01.992687   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 node add --alsologtostderr -v 5: (56.432290528s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.26s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-833825 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (15.54s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp testdata/cp-test.txt ha-833825:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3296467464/001/cp-test_ha-833825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825:/home/docker/cp-test.txt ha-833825-m02:/home/docker/cp-test_ha-833825_ha-833825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test_ha-833825_ha-833825-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825:/home/docker/cp-test.txt ha-833825-m03:/home/docker/cp-test_ha-833825_ha-833825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test_ha-833825_ha-833825-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825:/home/docker/cp-test.txt ha-833825-m04:/home/docker/cp-test_ha-833825_ha-833825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test_ha-833825_ha-833825-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp testdata/cp-test.txt ha-833825-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3296467464/001/cp-test_ha-833825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m02:/home/docker/cp-test.txt ha-833825:/home/docker/cp-test_ha-833825-m02_ha-833825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test_ha-833825-m02_ha-833825.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m02:/home/docker/cp-test.txt ha-833825-m03:/home/docker/cp-test_ha-833825-m02_ha-833825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test_ha-833825-m02_ha-833825-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m02:/home/docker/cp-test.txt ha-833825-m04:/home/docker/cp-test_ha-833825-m02_ha-833825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test_ha-833825-m02_ha-833825-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp testdata/cp-test.txt ha-833825-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3296467464/001/cp-test_ha-833825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m03:/home/docker/cp-test.txt ha-833825:/home/docker/cp-test_ha-833825-m03_ha-833825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test_ha-833825-m03_ha-833825.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m03:/home/docker/cp-test.txt ha-833825-m02:/home/docker/cp-test_ha-833825-m03_ha-833825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test_ha-833825-m03_ha-833825-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m03:/home/docker/cp-test.txt ha-833825-m04:/home/docker/cp-test_ha-833825-m03_ha-833825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test_ha-833825-m03_ha-833825-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp testdata/cp-test.txt ha-833825-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3296467464/001/cp-test_ha-833825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m04:/home/docker/cp-test.txt ha-833825:/home/docker/cp-test_ha-833825-m04_ha-833825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825 "sudo cat /home/docker/cp-test_ha-833825-m04_ha-833825.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m04:/home/docker/cp-test.txt ha-833825-m02:/home/docker/cp-test_ha-833825-m04_ha-833825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m02 "sudo cat /home/docker/cp-test_ha-833825-m04_ha-833825-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 cp ha-833825-m04:/home/docker/cp-test.txt ha-833825-m03:/home/docker/cp-test_ha-833825-m04_ha-833825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 ssh -n ha-833825-m03 "sudo cat /home/docker/cp-test_ha-833825-m04_ha-833825-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.54s)
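Note: the CopyFile steps above round-trip testdata/cp-test.txt between every node pair with `minikube cp` and verify each hop with `ssh ... sudo cat`. A minimal Go sketch of one such hop, assuming `minikube` is on PATH and the ha-833825 profile from this run is up (names mirror the log and are illustrative):

    // copycheck.go — one cp/ssh round-trip in the style of the CopyFile test.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // mk runs a minikube subcommand against the ha-833825 profile.
    func mk(args ...string) string {
    	full := append([]string{"-p", "ha-833825"}, args...)
    	out, err := exec.Command("minikube", full...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("minikube %v: %v\n%s", full, err, out)
    	}
    	return string(out)
    }

    func main() {
    	want, err := os.ReadFile("testdata/cp-test.txt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Push the file to a node, then read it back over SSH.
    	mk("cp", "testdata/cp-test.txt", "ha-833825-m02:/home/docker/cp-test.txt")
    	got := mk("ssh", "-n", "ha-833825-m02", "sudo cat /home/docker/cp-test.txt")
    	if got != string(want) {
    		log.Fatalf("round-trip mismatch: got %q", got)
    	}
    	log.Println("cp round-trip OK")
    }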

TestMultiControlPlane/serial/StopSecondaryNode (12.54s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 node stop m02 --alsologtostderr -v 5: (11.880803644s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5: exit status 7 (655.525174ms)

-- stdout --
	ha-833825
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833825-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833825-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 17:00:50.099222   80094 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:00:50.099467   80094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:00:50.099477   80094 out.go:374] Setting ErrFile to fd 2...
	I0908 17:00:50.099493   80094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:00:50.099684   80094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 17:00:50.099869   80094 out.go:368] Setting JSON to false
	I0908 17:00:50.099902   80094 mustload.go:65] Loading cluster: ha-833825
	I0908 17:00:50.099962   80094 notify.go:220] Checking for updates...
	I0908 17:00:50.100337   80094 config.go:182] Loaded profile config "ha-833825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:00:50.100362   80094 status.go:174] checking status of ha-833825 ...
	I0908 17:00:50.100858   80094 cli_runner.go:164] Run: docker container inspect ha-833825 --format={{.State.Status}}
	I0908 17:00:50.119465   80094 status.go:371] ha-833825 host status = "Running" (err=<nil>)
	I0908 17:00:50.119490   80094 host.go:66] Checking if "ha-833825" exists ...
	I0908 17:00:50.119757   80094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833825
	I0908 17:00:50.138677   80094 host.go:66] Checking if "ha-833825" exists ...
	I0908 17:00:50.138958   80094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:00:50.139008   80094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833825
	I0908 17:00:50.158573   80094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/ha-833825/id_rsa Username:docker}
	I0908 17:00:50.246838   80094 ssh_runner.go:195] Run: systemctl --version
	I0908 17:00:50.250876   80094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:00:50.261585   80094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 17:00:50.313717   80094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 17:00:50.303953479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 17:00:50.314456   80094 kubeconfig.go:125] found "ha-833825" server: "https://192.168.49.254:8443"
	I0908 17:00:50.314490   80094 api_server.go:166] Checking apiserver status ...
	I0908 17:00:50.314529   80094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:00:50.325930   80094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	I0908 17:00:50.335195   80094 api_server.go:182] apiserver freezer: "2:freezer:/docker/2b4046f1105541cff25bcdecd09b958ab08a7c58fb9fe021cbe56fdafae06b68/crio/crio-237d9d3dea3a900907167920a8a59901976142d3a075c49162bd6242c6b6b14f"
	I0908 17:00:50.335252   80094 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2b4046f1105541cff25bcdecd09b958ab08a7c58fb9fe021cbe56fdafae06b68/crio/crio-237d9d3dea3a900907167920a8a59901976142d3a075c49162bd6242c6b6b14f/freezer.state
	I0908 17:00:50.343242   80094 api_server.go:204] freezer state: "THAWED"
	I0908 17:00:50.343270   80094 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 17:00:50.347500   80094 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 17:00:50.347523   80094 status.go:463] ha-833825 apiserver status = Running (err=<nil>)
	I0908 17:00:50.347534   80094 status.go:176] ha-833825 status: &{Name:ha-833825 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:00:50.347547   80094 status.go:174] checking status of ha-833825-m02 ...
	I0908 17:00:50.347775   80094 cli_runner.go:164] Run: docker container inspect ha-833825-m02 --format={{.State.Status}}
	I0908 17:00:50.364766   80094 status.go:371] ha-833825-m02 host status = "Stopped" (err=<nil>)
	I0908 17:00:50.364789   80094 status.go:384] host is not running, skipping remaining checks
	I0908 17:00:50.364794   80094 status.go:176] ha-833825-m02 status: &{Name:ha-833825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:00:50.364824   80094 status.go:174] checking status of ha-833825-m03 ...
	I0908 17:00:50.365061   80094 cli_runner.go:164] Run: docker container inspect ha-833825-m03 --format={{.State.Status}}
	I0908 17:00:50.383447   80094 status.go:371] ha-833825-m03 host status = "Running" (err=<nil>)
	I0908 17:00:50.383472   80094 host.go:66] Checking if "ha-833825-m03" exists ...
	I0908 17:00:50.383761   80094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833825-m03
	I0908 17:00:50.401391   80094 host.go:66] Checking if "ha-833825-m03" exists ...
	I0908 17:00:50.401689   80094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:00:50.401728   80094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833825-m03
	I0908 17:00:50.418562   80094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/ha-833825-m03/id_rsa Username:docker}
	I0908 17:00:50.506720   80094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:00:50.517437   80094 kubeconfig.go:125] found "ha-833825" server: "https://192.168.49.254:8443"
	I0908 17:00:50.517463   80094 api_server.go:166] Checking apiserver status ...
	I0908 17:00:50.517490   80094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:00:50.527745   80094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I0908 17:00:50.536781   80094 api_server.go:182] apiserver freezer: "2:freezer:/docker/984f389d8c24454ef64cee8374b40d76be188d1ea8d99a16be5b8c0aecee2a4f/crio/crio-38abe9f79e0fd82e032fcb1adf928570ca1f2ea32a336a6f4c85b2da495427c1"
	I0908 17:00:50.536855   80094 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/984f389d8c24454ef64cee8374b40d76be188d1ea8d99a16be5b8c0aecee2a4f/crio/crio-38abe9f79e0fd82e032fcb1adf928570ca1f2ea32a336a6f4c85b2da495427c1/freezer.state
	I0908 17:00:50.545394   80094 api_server.go:204] freezer state: "THAWED"
	I0908 17:00:50.545432   80094 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 17:00:50.549694   80094 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 17:00:50.549728   80094 status.go:463] ha-833825-m03 apiserver status = Running (err=<nil>)
	I0908 17:00:50.549738   80094 status.go:176] ha-833825-m03 status: &{Name:ha-833825-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:00:50.549753   80094 status.go:174] checking status of ha-833825-m04 ...
	I0908 17:00:50.550009   80094 cli_runner.go:164] Run: docker container inspect ha-833825-m04 --format={{.State.Status}}
	I0908 17:00:50.569108   80094 status.go:371] ha-833825-m04 host status = "Running" (err=<nil>)
	I0908 17:00:50.569129   80094 host.go:66] Checking if "ha-833825-m04" exists ...
	I0908 17:00:50.569439   80094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833825-m04
	I0908 17:00:50.587814   80094 host.go:66] Checking if "ha-833825-m04" exists ...
	I0908 17:00:50.588051   80094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:00:50.588086   80094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833825-m04
	I0908 17:00:50.606563   80094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/ha-833825-m04/id_rsa Username:docker}
	I0908 17:00:50.694584   80094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:00:50.705616   80094 status.go:176] ha-833825-m04 status: &{Name:ha-833825-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.54s)
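Note: with m02 stopped, `minikube status` above still prints the full per-node table but exits with status 7, so automation can gate on the exit code instead of parsing the table. A sketch (profile name as in this run; the meaning of exit code 7 is taken from the output above, not from documentation):

    // statuscheck.go — detect a degraded cluster from the status exit code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "ha-833825", "status").CombinedOutput()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("all nodes running")
    	case errors.As(err, &ee):
    		// Exit status 7 corresponds to a stopped host, as in the run above.
    		fmt.Printf("degraded, exit %d:\n%s", ee.ExitCode(), out)
    	default:
    		fmt.Println("could not run minikube:", err)
    	}
    }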

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)
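Note: the Degraded*/HAppy* checks read cluster health from `minikube profile list --output json`. A sketch of consuming that output; the `valid`/`Name`/`Status` field names are an assumption about the JSON shape, not shown in this log:

    // profilestatus.go — print each profile's health as the Degraded* checks do.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`   // assumed field name
    		Status string `json:"Status"` // assumed field name
    	} `json:"valid"` // assumed top-level key
    }

    func main() {
    	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pl.Valid {
    		fmt.Printf("%s: %s\n", p.Name, p.Status) // e.g. "ha-833825: Degraded"
    	}
    }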

TestMultiControlPlane/serial/RestartSecondaryNode (22.05s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 node start m02 --alsologtostderr -v 5: (21.143276001s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.99s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 stop --alsologtostderr -v 5
E0908 17:01:25.062991   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.154895   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.161282   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.172734   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.194139   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.235639   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.317497   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.479019   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:27.800700   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:28.442806   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:29.724953   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:32.286535   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:01:37.408683   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 stop --alsologtostderr -v 5: (26.463619523s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 start --wait true --alsologtostderr -v 5
E0908 17:01:47.650673   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:02:08.132238   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:02:49.094664   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 start --wait true --alsologtostderr -v 5: (1m51.429967319s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.99s)
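Note: RestartClusterKeepsNodes asserts that `node list` reports the same nodes before and after a full stop/start cycle. A compact sketch of that comparison (destructive: it really stops the cluster; profile name as above):

    // restartkeeps.go — verify the node list survives a cluster restart.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func mk(args ...string) string {
    	full := append([]string{"-p", "ha-833825"}, args...)
    	out, err := exec.Command("minikube", full...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("minikube %v: %v\n%s", full, err, out)
    	}
    	return string(out)
    }

    func main() {
    	before := mk("node", "list")
    	mk("stop")
    	mk("start", "--wait", "true")
    	if after := mk("node", "list"); after != before {
    		log.Fatalf("node list changed:\nbefore:\n%safter:\n%s", before, after)
    	}
    	log.Println("node list preserved across restart")
    }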

TestMultiControlPlane/serial/DeleteSecondaryNode (13.42s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 node delete m03 --alsologtostderr -v 5: (12.627245693s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.42s)
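Note: after deleting m03 the test counts Ready nodes with the go-template shown above. The same check in Go, shelling out to kubectl with that template (minus the outer quoting the test harness adds):

    // readycount.go — count nodes whose Ready condition is True.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} {{end}}{{end}}{{end}}`
    	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%d Ready nodes\n", strings.Count(string(out), "True"))
    }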

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (35.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 stop --alsologtostderr -v 5
E0908 17:04:11.016228   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 stop --alsologtostderr -v 5: (35.451617325s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5: exit status 7 (107.003999ms)

-- stdout --
	ha-833825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833825-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 17:04:21.802631   96982 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:04:21.802771   96982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:04:21.802782   96982 out.go:374] Setting ErrFile to fd 2...
	I0908 17:04:21.802786   96982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:04:21.803014   96982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 17:04:21.803218   96982 out.go:368] Setting JSON to false
	I0908 17:04:21.803251   96982 mustload.go:65] Loading cluster: ha-833825
	I0908 17:04:21.803311   96982 notify.go:220] Checking for updates...
	I0908 17:04:21.803720   96982 config.go:182] Loaded profile config "ha-833825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:04:21.803740   96982 status.go:174] checking status of ha-833825 ...
	I0908 17:04:21.804192   96982 cli_runner.go:164] Run: docker container inspect ha-833825 --format={{.State.Status}}
	I0908 17:04:21.822929   96982 status.go:371] ha-833825 host status = "Stopped" (err=<nil>)
	I0908 17:04:21.822955   96982 status.go:384] host is not running, skipping remaining checks
	I0908 17:04:21.822966   96982 status.go:176] ha-833825 status: &{Name:ha-833825 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:04:21.823001   96982 status.go:174] checking status of ha-833825-m02 ...
	I0908 17:04:21.823385   96982 cli_runner.go:164] Run: docker container inspect ha-833825-m02 --format={{.State.Status}}
	I0908 17:04:21.843925   96982 status.go:371] ha-833825-m02 host status = "Stopped" (err=<nil>)
	I0908 17:04:21.843975   96982 status.go:384] host is not running, skipping remaining checks
	I0908 17:04:21.843985   96982 status.go:176] ha-833825-m02 status: &{Name:ha-833825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:04:21.844011   96982 status.go:174] checking status of ha-833825-m04 ...
	I0908 17:04:21.844282   96982 cli_runner.go:164] Run: docker container inspect ha-833825-m04 --format={{.State.Status}}
	I0908 17:04:21.862749   96982 status.go:371] ha-833825-m04 host status = "Stopped" (err=<nil>)
	I0908 17:04:21.862772   96982 status.go:384] host is not running, skipping remaining checks
	I0908 17:04:21.862778   96982 status.go:176] ha-833825-m04 status: &{Name:ha-833825-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.56s)

TestMultiControlPlane/serial/RestartCluster (61.43s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 17:05:01.991943   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m0.547433203s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.43s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (33.75s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-833825 node add --control-plane --alsologtostderr -v 5: (32.891947973s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-833825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (33.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (69.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-996210 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0908 17:06:27.154885   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:06:54.857877   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-996210 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.117223711s)
--- PASS: TestJSONOutput/start/Command (69.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-996210 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-996210 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-996210 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-996210 --output=json --user=testUser: (5.776022925s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-638384 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-638384 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (63.188341ms)

-- stdout --
	{"specversion":"1.0","id":"bd7a0891-848b-4d74-865c-5e621548da4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-638384] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"def95d38-746f-4250-bf50-4271cdaf2f5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"f81dd120-3df9-4e57-aae2-39c7fad59464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"118ecce5-5ae3-4d00-ba00-128992985ecd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig"}}
	{"specversion":"1.0","id":"b6f753b0-c8a5-4fa3-b54a-49d66c39f974","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube"}}
	{"specversion":"1.0","id":"db2aca4c-eedd-474b-9b7c-c68a78e3df04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f99c5816-f96c-4413-8b8d-2ebef102a73c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0aa0b194-2ca8-48a0-9f4d-a2bbbaa030de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-638384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-638384
--- PASS: TestErrorJSONOutput (0.20s)
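Note: with --output=json, minikube emits one CloudEvent per line, as captured in the stdout block above (type io.k8s.sigs.minikube.step / .info / .error, each with a string-valued data map). A sketch that surfaces error events from such a stream; only fields visible in the output above are decoded:

    // events.go — pick error events out of a minikube --output=json stream.
    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | events
    	for sc.Scan() {
    		var e event
    		if json.Unmarshal(sc.Bytes(), &e) != nil {
    			continue // not a JSON event line
    		}
    		if e.Type == "io.k8s.sigs.minikube.error" {
    			fmt.Printf("error %s (%s): %s\n", e.Data["exitcode"], e.Data["name"], e.Data["message"])
    		}
    	}
    }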

TestKicCustomNetwork/create_custom_network (37.25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-094998 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-094998 --network=: (35.125649352s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-094998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-094998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-094998: (2.105659153s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.25s)

TestKicCustomNetwork/use_default_bridge_network (26.52s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-121454 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-121454 --network=bridge: (24.546656552s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-121454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-121454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-121454: (1.958970991s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.52s)

TestKicExistingNetwork (25.27s)

=== RUN   TestKicExistingNetwork
I0908 17:08:30.420659   11141 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 17:08:30.437922   11141 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 17:08:30.438015   11141 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 17:08:30.438035   11141 cli_runner.go:164] Run: docker network inspect existing-network
W0908 17:08:30.455336   11141 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 17:08:30.455372   11141 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0908 17:08:30.455387   11141 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0908 17:08:30.455547   11141 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 17:08:30.476473   11141 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-251f5f11407b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:c5:23:62:86:6f} reservation:<nil>}
I0908 17:08:30.476838   11141 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013e10}
I0908 17:08:30.476866   11141 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 17:08:30.476905   11141 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 17:08:30.531784   11141 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-805552 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-805552 --network=existing-network: (23.164827874s)
helpers_test.go:175: Cleaning up "existing-network-805552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-805552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-805552: (1.960151699s)
I0908 17:08:55.676323   11141 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.27s)
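Note: TestKicExistingNetwork pre-creates a Docker network (the log above picks the free subnet 192.168.58.0/24 after skipping the taken 192.168.49.0/24) and then attaches a cluster to it via --network. A simplified sketch of that flow; the real run passes extra bridge options (--ip-masq, --icc, MTU, minikube labels) that are omitted here:

    // existingnet.go — create a network first, then start minikube on it.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func must(name string, args ...string) {
    	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
    		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
    	}
    }

    func main() {
    	must("docker", "network", "create", "--driver=bridge",
    		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", "existing-network")
    	must("minikube", "start", "-p", "existing-network-805552",
    		"--network=existing-network")
    }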

TestKicCustomSubnet (29.37s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-319111 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-319111 --subnet=192.168.60.0/24: (27.269400007s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-319111 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-319111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-319111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-319111: (2.076690637s)
--- PASS: TestKicCustomSubnet (29.37s)
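Note: the custom-subnet check above reads back the subnet Docker actually assigned, using `docker network inspect` with the Go template shown. The same verification in Go:

    // subnetcheck.go — confirm the cluster network got the requested subnet.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const want = "192.168.60.0/24"
    	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-319111",
    		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if got := strings.TrimSpace(string(out)); got != want {
    		log.Fatalf("subnet = %s, want %s", got, want)
    	}
    	log.Println("subnet OK:", want)
    }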

TestKicStaticIP (24.47s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-289326 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-289326 --static-ip=192.168.200.200: (22.273509653s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-289326 ip
helpers_test.go:175: Cleaning up "static-ip-289326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-289326
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-289326: (2.071238968s)
--- PASS: TestKicStaticIP (24.47s)
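Note: TestKicStaticIP starts the node with --static-ip and confirms the address with `minikube ip`. The same comparison in Go (profile name and address from the run above):

    // staticip.go — check that the cluster got the requested static IP.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const want = "192.168.200.200"
    	out, err := exec.Command("minikube", "-p", "static-ip-289326", "ip").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if got := strings.TrimSpace(string(out)); got != want {
    		log.Fatalf("ip = %s, want %s", got, want)
    	}
    	log.Println("static IP OK:", want)
    }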

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (55.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-191642 --driver=docker  --container-runtime=crio
E0908 17:10:01.995082   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-191642 --driver=docker  --container-runtime=crio: (24.768683736s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-202086 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-202086 --driver=docker  --container-runtime=crio: (26.283190555s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-191642
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-202086
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-202086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-202086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-202086: (1.806926929s)
helpers_test.go:175: Cleaning up "first-191642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-191642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-191642: (1.833804894s)
--- PASS: TestMinikubeProfile (55.85s)

TestMountStart/serial/StartWithMountFirst (8.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-859019 --memory=3072 --mount-string /tmp/TestMountStartserial1779505570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-859019 --memory=3072 --mount-string /tmp/TestMountStartserial1779505570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.39472803s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.40s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-859019 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
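Note: VerifyMount* only lists /minikube-host over SSH. A slightly stronger host-to-guest probe, assuming the host directory passed to --mount-string is known (the path below is illustrative; the test uses a per-run temp dir):

    // mountcheck.go — write on the host, then look for the file in the guest.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	hostDir := "/tmp/mount-demo" // host side of --mount-string (illustrative)
    	if err := os.MkdirAll(hostDir, 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile(filepath.Join(hostDir, "probe.txt"), []byte("hi"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	out, err := exec.Command("minikube", "-p", "mount-start-1-859019",
    		"ssh", "--", "ls", "/minikube-host").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if !strings.Contains(string(out), "probe.txt") {
    		log.Fatal("probe.txt not visible at /minikube-host")
    	}
    	log.Println("mount verified")
    }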

TestMountStart/serial/StartWithMountSecond (5.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-870953 --memory=3072 --mount-string /tmp/TestMountStartserial1779505570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-870953 --memory=3072 --mount-string /tmp/TestMountStartserial1779505570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.586013486s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870953 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-859019 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-859019 --alsologtostderr -v=5: (1.594769908s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870953 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-870953
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-870953: (1.183241048s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (8.21s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-870953
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-870953: (7.213694479s)
--- PASS: TestMountStart/serial/RestartStopped (8.21s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870953 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)
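
Taken together, Stop, RestartStopped, and the two Verify checks show the host mount being re-established across a stop/start cycle without the mount flags being passed again; the lifecycle under test reduces to:

	out/minikube-linux-amd64 stop -p mount-start-2-870953
	out/minikube-linux-amd64 start -p mount-start-2-870953
	out/minikube-linux-amd64 -p mount-start-2-870953 ssh -- ls /minikube-host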

TestMultiNode/serial/FreshStart2Nodes (128.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-612081 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 17:11:27.154677   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-612081 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m7.65799681s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.10s)
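
--nodes=2 provisions the control plane plus one worker; minikube names the additional machines <profile>-m02, <profile>-m03, and so on, which is how the later subtests address them. Stripped to its essentials, the invocation is:

	out/minikube-linux-amd64 start -p multinode-612081 --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr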

TestMultiNode/serial/DeployApp2Nodes (5.57s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-612081 -- rollout status deployment/busybox: (4.194954324s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-p6c9f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-z5ghj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-p6c9f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-z5ghj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-p6c9f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-z5ghj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.57s)
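
The deployment check schedules busybox replicas across both nodes and requires cluster DNS to resolve from each of them. With <pod> standing in for either generated replica name (busybox-7b57f96db7-p6c9f above, for example), the probe is:

	out/minikube-linux-amd64 kubectl -p multinode-612081 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local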

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-p6c9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-p6c9f -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-z5ghj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec busybox-7b57f96db7-z5ghj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
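
host.minikube.internal is minikube's in-cluster alias for the host side of the cluster network (192.168.67.1 in this run); the test resolves it with nslookup inside each pod and then pings the resulting address:

	out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec <pod> -- sh -c "nslookup host.minikube.internal"
	out/minikube-linux-amd64 kubectl -p multinode-612081 -- exec <pod> -- sh -c "ping -c 1 192.168.67.1"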

TestMultiNode/serial/AddNode (52.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-612081 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-612081 -v=5 --alsologtostderr: (52.181167057s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.76s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-612081 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (8.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp testdata/cp-test.txt multinode-612081:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2220323845/001/cp-test_multinode-612081.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081:/home/docker/cp-test.txt multinode-612081-m02:/home/docker/cp-test_multinode-612081_multinode-612081-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test_multinode-612081_multinode-612081-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081:/home/docker/cp-test.txt multinode-612081-m03:/home/docker/cp-test_multinode-612081_multinode-612081-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m03 "sudo cat /home/docker/cp-test_multinode-612081_multinode-612081-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp testdata/cp-test.txt multinode-612081-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2220323845/001/cp-test_multinode-612081-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081-m02:/home/docker/cp-test.txt multinode-612081:/home/docker/cp-test_multinode-612081-m02_multinode-612081.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081 "sudo cat /home/docker/cp-test_multinode-612081-m02_multinode-612081.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081-m02:/home/docker/cp-test.txt multinode-612081-m03:/home/docker/cp-test_multinode-612081-m02_multinode-612081-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m03 "sudo cat /home/docker/cp-test_multinode-612081-m02_multinode-612081-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp testdata/cp-test.txt multinode-612081-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2220323845/001/cp-test_multinode-612081-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081-m03:/home/docker/cp-test.txt multinode-612081:/home/docker/cp-test_multinode-612081-m03_multinode-612081.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081 "sudo cat /home/docker/cp-test_multinode-612081-m03_multinode-612081.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081-m03:/home/docker/cp-test.txt multinode-612081-m02:/home/docker/cp-test_multinode-612081-m03_multinode-612081-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test_multinode-612081-m03_multinode-612081-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.86s)
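
The matrix above exercises all three forms of minikube cp, each one verified by ssh -n ... sudo cat on the receiving node: local to node, node back to the local filesystem, and node to node. With the permutations stripped away (the /tmp destination is illustrative):

	out/minikube-linux-amd64 -p multinode-612081 cp testdata/cp-test.txt multinode-612081:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-amd64 -p multinode-612081 cp multinode-612081:/home/docker/cp-test.txt multinode-612081-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-612081 ssh -n multinode-612081-m02 "sudo cat /home/docker/cp-test.txt"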

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-612081 node stop m03: (1.176420912s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-612081 status: exit status 7 (451.945434ms)
-- stdout --
	multinode-612081
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-612081-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-612081-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr: exit status 7 (451.183732ms)
-- stdout --
	multinode-612081
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-612081-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-612081-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 17:14:31.495891  161687 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:14:31.496141  161687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:14:31.496151  161687 out.go:374] Setting ErrFile to fd 2...
	I0908 17:14:31.496158  161687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:14:31.496344  161687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 17:14:31.496539  161687 out.go:368] Setting JSON to false
	I0908 17:14:31.496572  161687 mustload.go:65] Loading cluster: multinode-612081
	I0908 17:14:31.496726  161687 notify.go:220] Checking for updates...
	I0908 17:14:31.496974  161687 config.go:182] Loaded profile config "multinode-612081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:14:31.496995  161687 status.go:174] checking status of multinode-612081 ...
	I0908 17:14:31.497474  161687 cli_runner.go:164] Run: docker container inspect multinode-612081 --format={{.State.Status}}
	I0908 17:14:31.515640  161687 status.go:371] multinode-612081 host status = "Running" (err=<nil>)
	I0908 17:14:31.515674  161687 host.go:66] Checking if "multinode-612081" exists ...
	I0908 17:14:31.515945  161687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-612081
	I0908 17:14:31.534316  161687 host.go:66] Checking if "multinode-612081" exists ...
	I0908 17:14:31.534743  161687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:14:31.534789  161687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-612081
	I0908 17:14:31.552621  161687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/multinode-612081/id_rsa Username:docker}
	I0908 17:14:31.638500  161687 ssh_runner.go:195] Run: systemctl --version
	I0908 17:14:31.642352  161687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:14:31.652782  161687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 17:14:31.699601  161687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 17:14:31.690759031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 17:14:31.700286  161687 kubeconfig.go:125] found "multinode-612081" server: "https://192.168.67.2:8443"
	I0908 17:14:31.700317  161687 api_server.go:166] Checking apiserver status ...
	I0908 17:14:31.700358  161687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:14:31.710714  161687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1548/cgroup
	I0908 17:14:31.719337  161687 api_server.go:182] apiserver freezer: "2:freezer:/docker/10cdf47a741f32cffa1b56b516f3e1772f34ac7834ba1d878be8f745ac1438ef/crio/crio-806af72d1e7c6d0a25252bd68e201d8eb339b755c0bfdcd03194dc08996f87d9"
	I0908 17:14:31.719417  161687 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/10cdf47a741f32cffa1b56b516f3e1772f34ac7834ba1d878be8f745ac1438ef/crio/crio-806af72d1e7c6d0a25252bd68e201d8eb339b755c0bfdcd03194dc08996f87d9/freezer.state
	I0908 17:14:31.726952  161687 api_server.go:204] freezer state: "THAWED"
	I0908 17:14:31.726981  161687 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 17:14:31.730924  161687 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 17:14:31.730944  161687 status.go:463] multinode-612081 apiserver status = Running (err=<nil>)
	I0908 17:14:31.730953  161687 status.go:176] multinode-612081 status: &{Name:multinode-612081 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:14:31.730967  161687 status.go:174] checking status of multinode-612081-m02 ...
	I0908 17:14:31.731175  161687 cli_runner.go:164] Run: docker container inspect multinode-612081-m02 --format={{.State.Status}}
	I0908 17:14:31.748263  161687 status.go:371] multinode-612081-m02 host status = "Running" (err=<nil>)
	I0908 17:14:31.748287  161687 host.go:66] Checking if "multinode-612081-m02" exists ...
	I0908 17:14:31.748537  161687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-612081-m02
	I0908 17:14:31.765615  161687 host.go:66] Checking if "multinode-612081-m02" exists ...
	I0908 17:14:31.765913  161687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:14:31.765956  161687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-612081-m02
	I0908 17:14:31.785049  161687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21504-7450/.minikube/machines/multinode-612081-m02/id_rsa Username:docker}
	I0908 17:14:31.870726  161687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:14:31.881415  161687 status.go:176] multinode-612081-m02 status: &{Name:multinode-612081-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:14:31.881458  161687 status.go:174] checking status of multinode-612081-m03 ...
	I0908 17:14:31.881737  161687 cli_runner.go:164] Run: docker container inspect multinode-612081-m03 --format={{.State.Status}}
	I0908 17:14:31.899411  161687 status.go:371] multinode-612081-m03 host status = "Stopped" (err=<nil>)
	I0908 17:14:31.899443  161687 status.go:384] host is not running, skipping remaining checks
	I0908 17:14:31.899451  161687 status.go:176] multinode-612081-m03 status: &{Name:multinode-612081-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
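
Note the exit codes: status deliberately returns a non-zero code (7 here) whenever any host in the profile is stopped, so the Non-zero exit entries above are the expected outcome of stopping m03, not failures. The shape of the check:

	out/minikube-linux-amd64 -p multinode-612081 node stop m03
	out/minikube-linux-amd64 -p multinode-612081 status; echo "status exited $?"    # prints 7 while m03 is down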

TestMultiNode/serial/StartAfterStop (7.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-612081 node start m03 -v=5 --alsologtostderr: (6.355912416s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.01s)

TestMultiNode/serial/RestartKeepsNodes (71.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-612081
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-612081
E0908 17:15:01.991620   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-612081: (24.721139944s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-612081 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-612081 --wait=true -v=5 --alsologtostderr: (46.925538812s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-612081
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.74s)

TestMultiNode/serial/DeleteNode (5.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-612081 node delete m03: (4.619602779s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)

TestMultiNode/serial/StopMultiNode (23.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-612081 stop: (23.544849176s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-612081 status: exit status 7 (82.885431ms)
-- stdout --
	multinode-612081
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-612081-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr: exit status 7 (83.984771ms)
-- stdout --
	multinode-612081
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-612081-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 17:16:19.499321  171285 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:16:19.499439  171285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:16:19.499444  171285 out.go:374] Setting ErrFile to fd 2...
	I0908 17:16:19.499448  171285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:16:19.499648  171285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 17:16:19.499822  171285 out.go:368] Setting JSON to false
	I0908 17:16:19.499851  171285 mustload.go:65] Loading cluster: multinode-612081
	I0908 17:16:19.500020  171285 notify.go:220] Checking for updates...
	I0908 17:16:19.500227  171285 config.go:182] Loaded profile config "multinode-612081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:16:19.500244  171285 status.go:174] checking status of multinode-612081 ...
	I0908 17:16:19.500659  171285 cli_runner.go:164] Run: docker container inspect multinode-612081 --format={{.State.Status}}
	I0908 17:16:19.518424  171285 status.go:371] multinode-612081 host status = "Stopped" (err=<nil>)
	I0908 17:16:19.518456  171285 status.go:384] host is not running, skipping remaining checks
	I0908 17:16:19.518464  171285 status.go:176] multinode-612081 status: &{Name:multinode-612081 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:16:19.518495  171285 status.go:174] checking status of multinode-612081-m02 ...
	I0908 17:16:19.518769  171285 cli_runner.go:164] Run: docker container inspect multinode-612081-m02 --format={{.State.Status}}
	I0908 17:16:19.536400  171285 status.go:371] multinode-612081-m02 host status = "Stopped" (err=<nil>)
	I0908 17:16:19.536449  171285 status.go:384] host is not running, skipping remaining checks
	I0908 17:16:19.536459  171285 status.go:176] multinode-612081-m02 status: &{Name:multinode-612081-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.71s)

TestMultiNode/serial/RestartMultiNode (48.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-612081 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 17:16:27.154988   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-612081 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.125315207s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-612081 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.67s)

TestMultiNode/serial/ValidateNameConflict (26.36s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-612081
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-612081-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-612081-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.050065ms)
-- stdout --
	* [multinode-612081-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-612081-m02' is duplicated with machine name 'multinode-612081-m02' in profile 'multinode-612081'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-612081-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-612081-m03 --driver=docker  --container-runtime=crio: (24.130721227s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-612081
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-612081: exit status 80 (264.231486ms)
-- stdout --
	* Adding node m03 to cluster multinode-612081 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-612081-m03 already exists in multinode-612081-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-612081-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-612081-m03: (1.852967926s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.36s)
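
Two guardrails are covered here: a new profile may not reuse the machine name of an existing multinode member (exit 14, MK_USAGE), and node add refuses when the next generated node name, m03 in this case, is already taken by a standalone profile (exit 80, GUEST_NODE_ADD). Both fail fast before any container is created:

	out/minikube-linux-amd64 start -p multinode-612081-m02 --driver=docker --container-runtime=crio    # exit 14: collides with a machine in multinode-612081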

TestPreload (118.89s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-786360 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0908 17:17:50.220342   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:18:05.065840   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-786360 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.785979175s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-786360 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-786360 image pull gcr.io/k8s-minikube/busybox: (3.3430912s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-786360
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-786360: (5.65612842s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-786360 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-786360 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.647768349s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-786360 image list
helpers_test.go:175: Cleaning up "test-preload-786360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-786360
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-786360: (2.246337946s)
--- PASS: TestPreload (118.89s)
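
TestPreload guards against the preload tarball clobbering images added by hand: the cluster starts on v1.32.0 with --preload=false, pulls an extra image, stops, and restarts with the default preload path; image list must still show the pulled busybox afterwards. The core sequence from the log:

	out/minikube-linux-amd64 start -p test-preload-786360 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-786360 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-786360
	out/minikube-linux-amd64 start -p test-preload-786360 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-786360 image list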

TestScheduledStopUnix (100.47s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-179953 --memory=3072 --driver=docker  --container-runtime=crio
E0908 17:20:01.992122   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-179953 --memory=3072 --driver=docker  --container-runtime=crio: (25.12978228s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179953 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-179953 -n scheduled-stop-179953
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179953 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 17:20:02.869492   11141 retry.go:31] will retry after 117.755µs: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.870673   11141 retry.go:31] will retry after 160.084µs: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.871823   11141 retry.go:31] will retry after 189.642µs: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.872971   11141 retry.go:31] will retry after 279.23µs: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.874097   11141 retry.go:31] will retry after 455.737µs: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.875221   11141 retry.go:31] will retry after 1.08036ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.876352   11141 retry.go:31] will retry after 578.562µs: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.877474   11141 retry.go:31] will retry after 2.261387ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.880675   11141 retry.go:31] will retry after 2.345932ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.883891   11141 retry.go:31] will retry after 4.18676ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.889109   11141 retry.go:31] will retry after 3.731001ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.893331   11141 retry.go:31] will retry after 9.893257ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.903564   11141 retry.go:31] will retry after 14.261196ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.918814   11141 retry.go:31] will retry after 15.774583ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
I0908 17:20:02.935099   11141 retry.go:31] will retry after 37.564254ms: open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/scheduled-stop-179953/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179953 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179953 -n scheduled-stop-179953
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179953
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179953 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179953
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-179953: exit status 7 (68.067156ms)
-- stdout --
	scheduled-stop-179953
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179953 -n scheduled-stop-179953
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179953 -n scheduled-stop-179953: exit status 7 (66.861075ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-179953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-179953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-179953: (4.051870665s)
--- PASS: TestScheduledStopUnix (100.47s)
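
The scheduled-stop flow being validated: --schedule arms a background process that stops the cluster after the given delay, --cancel-scheduled disarms it, and a short 15s schedule is then left to fire, after which status reports Stopped with exit code 7 (expected). In CLI terms:

	out/minikube-linux-amd64 stop -p scheduled-stop-179953 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-179953 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-179953 --schedule 15s
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179953    # Stopped once the timer fires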

TestInsufficientStorage (12.28s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-548926 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E0908 17:21:27.158955   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-548926 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.957074718s)
-- stdout --
	{"specversion":"1.0","id":"929a52d8-7b30-4433-953b-a49a559e780b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-548926] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5dbb73a4-bfa9-4890-a3bd-93d8d6ae11db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"31d6bd7a-2e7f-4dbf-9034-39765f9286f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d5d17595-7e88-4ce3-81c2-93b2ddf509da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig"}}
	{"specversion":"1.0","id":"8bef305c-6169-4037-9917-edd28f27cfd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube"}}
	{"specversion":"1.0","id":"29216bc9-7427-4d9f-990d-8fd4f4c93337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"50a55b1e-4447-44b2-ab56-8577c5c9c300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68006615-8891-4be5-8c2d-f6dce95d4df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c6d992d4-e212-4b73-993c-c6b77c3792ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c11d9f90-f1da-4c45-a1af-87cae076f229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"65506d59-4e06-4cce-8591-f5a9c52efff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d1cd786b-6ba6-4b9c-99ec-799a4cac1bac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-548926\" primary control-plane node in \"insufficient-storage-548926\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"faab550e-2f46-405e-8517-febbd634939d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b36dd5bd-bc83-412c-b61c-a163c2dca295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad2e7dd1-c171-4d1e-b5ff-710f3702523a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-548926 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-548926 --output=json --layout=cluster: exit status 7 (254.918995ms)
-- stdout --
	{"Name":"insufficient-storage-548926","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-548926","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 17:21:28.022798  193600 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-548926" does not appear in /home/jenkins/minikube-integration/21504-7450/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-548926 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-548926 --output=json --layout=cluster: exit status 7 (254.00248ms)
-- stdout --
	{"Name":"insufficient-storage-548926","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-548926","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 17:21:28.277610  193704 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-548926" does not appear in /home/jenkins/minikube-integration/21504-7450/kubeconfig
	E0908 17:21:28.287472  193704 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/insufficient-storage-548926/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-548926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-548926
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-548926: (1.80845715s)
--- PASS: TestInsufficientStorage (12.28s)
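
The storage exhaustion is simulated, not real: judging by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 settings echoed in the JSON output, the harness makes start believe /var is full, so it exits with code 26 (RSRC_DOCKER_STORAGE) and emits machine-readable CloudEvents a caller can parse. Assuming those environment variables behave as the echoed settings suggest, a reproduction would be:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		out/minikube-linux-amd64 start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio; echo "exit $?"    # 26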

TestRunningBinaryUpgrade (41.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3415208912 start -p running-upgrade-254371 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3415208912 start -p running-upgrade-254371 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.534394051s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-254371 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-254371 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.040047833s)
helpers_test.go:175: Cleaning up "running-upgrade-254371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-254371
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-254371: (2.129059794s)
--- PASS: TestRunningBinaryUpgrade (41.35s)
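
The upgrade path here is binary-over-binary on a live cluster: an archived v1.32.0 release creates the profile, then the freshly built binary restarts the same profile in place; the roughly 13s second start suggests the existing docker machine was reused rather than recreated. Schematically (the /tmp path is the harness's temporary copy of the old release):

	/tmp/minikube-v1.32.0.3415208912 start -p running-upgrade-254371 --memory=3072 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-254371 --memory=3072 --driver=docker --container-runtime=crio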

TestKubernetesUpgrade (351.24s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.251894208s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-069462
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-069462: (1.230919593s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-069462 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-069462 status --format={{.Host}}: exit status 7 (77.647293ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.090636555s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-069462 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (89.961307ms)

-- stdout --
	* [kubernetes-upgrade-069462] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-069462
	    minikube start -p kubernetes-upgrade-069462 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0694622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-069462 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069462 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.599525961s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-069462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-069462
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-069462: (2.817423537s)
--- PASS: TestKubernetesUpgrade (351.24s)
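
Note: the sequence above exercises the supported upgrade path: start on the old Kubernetes version, stop, start again on the new version; the in-place downgrade is then rejected with K8S_DOWNGRADE_UNSUPPORTED (exit status 106). A condensed sketch of the same flow (profile name illustrative):

    minikube start -p k8s-upgrade --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade
    minikube start -p k8s-upgrade --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio
    # downgrading means delete + recreate, per the suggestion in the output above:
    minikube delete -p k8s-upgrade
    minikube start -p k8s-upgrade --kubernetes-version=v1.28.0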

TestMissingContainerUpgrade (94.73s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1444520304 start -p missing-upgrade-117058 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1444520304 start -p missing-upgrade-117058 --memory=3072 --driver=docker  --container-runtime=crio: (49.977230171s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-117058
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-117058: (2.830341883s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-117058
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-117058 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-117058 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.907137995s)
helpers_test.go:175: Cleaning up "missing-upgrade-117058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-117058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-117058: (2.341078679s)
--- PASS: TestMissingContainerUpgrade (94.73s)
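
Note: this test simulates a node container that disappeared between runs: the container is removed out from under the old profile with plain docker commands, and the newer binary is expected to recreate it on the next start. Roughly:

    docker stop missing-upgrade-117058
    docker rm missing-upgrade-117058
    minikube start -p missing-upgrade-117058 --memory=3072 --driver=docker --container-runtime=crio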

TestStoppedBinaryUpgrade/Setup (2.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

TestStoppedBinaryUpgrade/Upgrade (68.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2840004824 start -p stopped-upgrade-320978 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2840004824 start -p stopped-upgrade-320978 --memory=3072 --vm-driver=docker  --container-runtime=crio: (52.809708333s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2840004824 -p stopped-upgrade-320978 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2840004824 -p stopped-upgrade-320978 stop: (1.219958745s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-320978 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-320978 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.66947028s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.70s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-320978
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-320978: (1.000685131s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestPause/serial/Start (73.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-073444 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-073444 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m13.476686749s)
--- PASS: TestPause/serial/Start (73.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-172062 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-172062 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (83.504102ms)

-- stdout --
	* [NoKubernetes-172062] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
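
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start above fails fast with MK_USAGE (exit status 14) before any node is created. If a version is pinned in the global config, the output's own suggestion is to clear it first:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-172062 --no-kubernetes --driver=docker --container-runtime=crio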

TestNoKubernetes/serial/StartWithK8s (27.01s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-172062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-172062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.657789895s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-172062 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.01s)

TestNetworkPlugins/group/false (3.53s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-589911 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-589911 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (147.221321ms)

-- stdout --
	* [false-589911] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0908 17:23:31.279724  225330 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:23:31.280259  225330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:23:31.280275  225330 out.go:374] Setting ErrFile to fd 2...
	I0908 17:23:31.280281  225330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:23:31.280753  225330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7450/.minikube/bin
	I0908 17:23:31.281371  225330 out.go:368] Setting JSON to false
	I0908 17:23:31.282554  225330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3955,"bootTime":1757348256,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 17:23:31.282607  225330 start.go:140] virtualization: kvm guest
	I0908 17:23:31.284929  225330 out.go:179] * [false-589911] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 17:23:31.286473  225330 notify.go:220] Checking for updates...
	I0908 17:23:31.286505  225330 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 17:23:31.287954  225330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 17:23:31.289248  225330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7450/kubeconfig
	I0908 17:23:31.290546  225330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7450/.minikube
	I0908 17:23:31.291806  225330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 17:23:31.292995  225330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 17:23:31.294764  225330 config.go:182] Loaded profile config "NoKubernetes-172062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:23:31.294913  225330 config.go:182] Loaded profile config "kubernetes-upgrade-069462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:23:31.295039  225330 config.go:182] Loaded profile config "pause-073444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:23:31.295146  225330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 17:23:31.319361  225330 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 17:23:31.319464  225330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 17:23:31.369578  225330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:75 SystemTime:2025-09-08 17:23:31.359686126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 17:23:31.369768  225330 docker.go:318] overlay module found
	I0908 17:23:31.372754  225330 out.go:179] * Using the docker driver based on user configuration
	I0908 17:23:31.374113  225330 start.go:304] selected driver: docker
	I0908 17:23:31.374145  225330 start.go:918] validating driver "docker" against <nil>
	I0908 17:23:31.374157  225330 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 17:23:31.376365  225330 out.go:203] 
	W0908 17:23:31.377653  225330 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 17:23:31.379194  225330 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-589911 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-589911

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-589911

>>> host: /etc/nsswitch.conf:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/hosts:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/resolv.conf:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-589911

>>> host: crictl pods:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: crictl containers:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> k8s: describe netcat deployment:
error: context "false-589911" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-589911" does not exist

>>> k8s: netcat logs:
error: context "false-589911" does not exist

>>> k8s: describe coredns deployment:
error: context "false-589911" does not exist

>>> k8s: describe coredns pods:
error: context "false-589911" does not exist

>>> k8s: coredns logs:
error: context "false-589911" does not exist

>>> k8s: describe api server pod(s):
error: context "false-589911" does not exist

>>> k8s: api server logs:
error: context "false-589911" does not exist

>>> host: /etc/cni:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: ip a s:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: ip r s:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: iptables-save:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: iptables table nat:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> k8s: describe kube-proxy daemon set:
error: context "false-589911" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-589911" does not exist

>>> k8s: kube-proxy logs:
error: context "false-589911" does not exist

>>> host: kubelet daemon status:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: kubelet daemon config:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> k8s: kubelet logs:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-172062
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:22:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-069462
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-073444
contexts:
- context:
    cluster: NoKubernetes-172062
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-172062
  name: NoKubernetes-172062
- context:
    cluster: kubernetes-upgrade-069462
    user: kubernetes-upgrade-069462
  name: kubernetes-upgrade-069462
- context:
    cluster: pause-073444
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-073444
  name: pause-073444
current-context: pause-073444
kind: Config
preferences: {}
users:
- name: NoKubernetes-172062
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/NoKubernetes-172062/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/NoKubernetes-172062/client.key
- name: kubernetes-upgrade-069462
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/kubernetes-upgrade-069462/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/kubernetes-upgrade-069462/client.key
- name: pause-073444
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/pause-073444/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/pause-073444/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-589911

>>> host: docker daemon status:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: docker daemon config:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/docker/daemon.json:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: docker system info:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: cri-docker daemon status:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: cri-docker daemon config:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: cri-dockerd version:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: containerd daemon status:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: containerd daemon config:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/containerd/config.toml:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: containerd config dump:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: crio daemon status:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: crio daemon config:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: /etc/crio:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

>>> host: crio config:
* Profile "false-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-589911"

----------------------- debugLogs end: false-589911 [took: 3.216767848s] --------------------------------
helpers_test.go:175: Cleaning up "false-589911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-589911
--- PASS: TestNetworkPlugins/group/false (3.53s)
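
Note: --cni=false is rejected for the crio runtime (MK_USAGE: The "crio" container runtime requires CNI), so the start exits with status 14 before any profile or context exists, which is why every debugLogs probe above reports a missing profile. A start that should be accepted instead names an explicit CNI, e.g. (sketch; any supported --cni choice such as bridge, kindnet, or calico would do):

    minikube start -p crio-cni --driver=docker --container-runtime=crio --cni=bridge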

TestNoKubernetes/serial/StartWithStopK8s (6.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-172062 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-172062 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.054758653s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-172062 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-172062 status -o json: exit status 2 (287.562345ms)

-- stdout --
	{"Name":"NoKubernetes-172062","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-172062
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-172062: (1.945114706s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.29s)

TestNoKubernetes/serial/Start (7.95s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-172062 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-172062 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.946516659s)
--- PASS: TestNoKubernetes/serial/Start (7.95s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-172062 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-172062 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.781053ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
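
Note: the non-zero exit here is the expected signal, not a failure: systemctl is-active exits 3 for an inactive unit, and the ssh session propagates that ("Process exited with status 3"). A by-hand check looks like:

    out/minikube-linux-amd64 ssh -p NoKubernetes-172062 "sudo systemctl is-active kubelet" || echo "kubelet not running"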

TestNoKubernetes/serial/ProfileList (16.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (15.206512645s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.238386935s)
--- PASS: TestNoKubernetes/serial/ProfileList (16.45s)

TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-172062
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-172062: (1.189910513s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (7.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-172062 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-172062 --driver=docker  --container-runtime=crio: (7.25031763s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-172062 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-172062 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.356621ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/SecondStartNoReconfiguration (18.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-073444 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-073444 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.158074396s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.18s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-073444 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-073444 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-073444 --output=json --layout=cluster: exit status 2 (325.97202ms)

-- stdout --
	{"Name":"pause-073444","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-073444","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
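
Note: for a paused cluster, status reports code 418 ("Paused") for the cluster and apiserver and 405 ("Stopped") for the kubelet, and the status command itself exits 2, as above. A small sketch for scripting against that, assuming jq is available:

    minikube status -p pause-073444 --output=json --layout=cluster | jq -e '.StatusCode == 418' \
      && echo "cluster is paused"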

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-073444 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-073444 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (2.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-073444 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-073444 --alsologtostderr -v=5: (2.791319848s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

TestPause/serial/VerifyDeletedResources (15.13s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.064532938s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-073444
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-073444: exit status 1 (21.640282ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-073444: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.13s)
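
Note: deletion is verified by poking Docker directly; once the profile is gone, the volume lookup fails with "no such volume" (exit status 1). A rough equivalent:

    docker ps -a --filter name=pause-073444
    docker volume inspect pause-073444   # exits 1 once the volume is deleted
    docker network ls --filter name=pause-073444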

TestStartStop/group/old-k8s-version/serial/FirstStart (54.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-060127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0908 17:25:01.991974   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-060127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.078299856s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.08s)

TestStartStop/group/no-preload/serial/FirstStart (56.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-905445 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-905445 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (56.96835043s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.97s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-060127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5fcc15c0-ab7c-435b-9391-38f4838a8013] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5fcc15c0-ab7c-435b-9391-38f4838a8013] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003748493s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-060127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
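
Note: the deploy step applies a busybox pod and waits for it to become Ready before exec'ing into it. The test uses its own polling helper; roughly the same check with plain kubectl, assuming the test's busybox manifest and its integration-test=busybox label:

    kubectl --context old-k8s-version-060127 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-060127 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-060127 exec busybox -- /bin/sh -c "ulimit -n"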

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-060127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-060127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-060127 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-060127 --alsologtostderr -v=3: (12.014972703s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-905445 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d167d07-2f51-4b30-bd1c-0395b7efbf6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8d167d07-2f51-4b30-bd1c-0395b7efbf6a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003949122s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-905445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-060127 -n old-k8s-version-060127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-060127 -n old-k8s-version-060127: exit status 7 (68.053115ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-060127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
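
The "exit status 7 (may be ok)" note reflects that minikube status reports a stopped profile through its exit code; a sketch of the same sequence:

    # On a stopped profile, status prints "Stopped" and exits non-zero (7 in this run).
    minikube status --format={{.Host}} -p old-k8s-version-060127 \
      || echo "status exited $? - expected while the profile is stopped"
    # Addons can be enabled while stopped; the change is recorded in the profile config.
    minikube addons enable dashboard -p old-k8s-version-060127 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4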

TestStartStop/group/old-k8s-version/serial/SecondStart (46.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-060127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-060127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.914637093s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-060127 -n old-k8s-version-060127
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.22s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-905445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-905445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (11.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-905445 --alsologtostderr -v=3
E0908 17:26:27.154753   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-905445 --alsologtostderr -v=3: (11.920954297s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-905445 -n no-preload-905445
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-905445 -n no-preload-905445: exit status 7 (70.246163ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-905445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (48.45s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-905445 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-905445 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.120991117s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-905445 -n no-preload-905445
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.45s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cxn4g" [e2abf3cc-d906-4ccf-b8b8-3dcdb3e5fa34] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003094908s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
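
UserAppExistsAfterStop only verifies that the dashboard pod survived the stop/start cycle; a minimal equivalent (again substituting kubectl wait for the harness's polling):

    kubectl --context old-k8s-version-060127 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m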

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cxn4g" [e2abf3cc-d906-4ccf-b8b8-3dcdb3e5fa34] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003815615s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-060127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-060127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
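
The image audit is one CLI call; the harness then reports any repo tags outside the expected Kubernetes image set (the kindnetd and busybox entries above are known and tolerated). As a sketch:

    # Dump every image known to the profile's container runtime as JSON.
    minikube -p old-k8s-version-060127 image list --format=json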

TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-060127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-060127 -n old-k8s-version-060127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-060127 -n old-k8s-version-060127: exit status 2 (288.984953ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-060127 -n old-k8s-version-060127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-060127 -n old-k8s-version-060127: exit status 2 (291.5439ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-060127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-060127 -n old-k8s-version-060127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-060127 -n old-k8s-version-060127
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)
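
The Pause subtest asserts on exit codes as much as on output: while paused, both status probes exit 2, and unpause restores them to zero. A sketch of the cycle:

    minikube pause -p old-k8s-version-060127
    minikube status --format={{.APIServer}} -p old-k8s-version-060127   # prints "Paused", exits 2
    minikube status --format={{.Kubelet}} -p old-k8s-version-060127     # prints "Stopped", exits 2
    minikube unpause -p old-k8s-version-060127
    minikube status --format={{.APIServer}} -p old-k8s-version-060127   # back to exit 0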

TestStartStop/group/embed-certs/serial/FirstStart (77.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-200185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-200185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m17.137201364s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.14s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xm9zf" [088bc5e1-34b7-4f89-8c5d-9a8deab71653] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003827643s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-931891 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-931891 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m13.898872361s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.90s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xm9zf" [088bc5e1-34b7-4f89-8c5d-9a8deab71653] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003695235s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-905445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-905445 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.59s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-905445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-905445 -n no-preload-905445
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-905445 -n no-preload-905445: exit status 2 (286.362554ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-905445 -n no-preload-905445
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-905445 -n no-preload-905445: exit status 2 (288.29228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-905445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-905445 -n no-preload-905445
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-905445 -n no-preload-905445
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.59s)

TestStartStop/group/newest-cni/serial/FirstStart (31.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-077221 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-077221 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (31.704951056s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.71s)

TestNetworkPlugins/group/auto/Start (72.1s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m12.098276455s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-077221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-077221 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-077221 --alsologtostderr -v=3: (1.204821741s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077221 -n newest-cni-077221
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077221 -n newest-cni-077221: exit status 7 (69.616116ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-077221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-077221 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-077221 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (14.693279762s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077221 -n newest-cni-077221
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-077221 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-077221 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077221 -n newest-cni-077221
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077221 -n newest-cni-077221: exit status 2 (290.2278ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077221 -n newest-cni-077221
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077221 -n newest-cni-077221: exit status 2 (295.341733ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-077221 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077221 -n newest-cni-077221
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077221 -n newest-cni-077221
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.97s)

TestNetworkPlugins/group/kindnet/Start (74.93s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m14.928966337s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.93s)
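
The TestNetworkPlugins starts differ only in the CNI selector handed to minikube start; the kindnet case, as a sketch (other runs in this report swap in --cni=calico, --cni=flannel, --cni=testdata/kube-flannel.yaml, or --enable-default-cni=true):

    minikube start -p kindnet-589911 --memory=3072 --cni=kindnet \
      --driver=docker --container-runtime=crio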

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-200185 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e315ba6-70a7-4ac0-adde-fbb0e334245b] Pending
helpers_test.go:352: "busybox" [3e315ba6-70a7-4ac0-adde-fbb0e334245b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e315ba6-70a7-4ac0-adde-fbb0e334245b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003847661s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-200185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-931891 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4bf2cd2f-85ba-46a3-887b-fec46a512163] Pending
helpers_test.go:352: "busybox" [4bf2cd2f-85ba-46a3-887b-fec46a512163] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4bf2cd2f-85ba-46a3-887b-fec46a512163] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003889646s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-931891 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-200185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-200185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/embed-certs/serial/Stop (11.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-200185 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-200185 --alsologtostderr -v=3: (11.907293862s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-931891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-931891 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-931891 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-931891 --alsologtostderr -v=3: (12.029267369s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-200185 -n embed-certs-200185
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-200185 -n embed-certs-200185: exit status 7 (106.619317ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-200185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (49.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-200185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-200185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (49.00130807s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-200185 -n embed-certs-200185
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891: exit status 7 (80.115031ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-931891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-931891 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-931891 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (53.025086732s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.33s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-589911 "pgrep -a kubelet"
I0908 17:29:10.027841   11141 config.go:182] Loaded profile config "auto-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-49l24" [17fdd138-32a5-47d3-b8d0-985ea2c35597] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-49l24" [17fdd138-32a5-47d3-b8d0-985ea2c35597] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004521145s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)
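
Each NetCatPod step installs the same probe deployment before the connectivity checks; a sketch (kubectl wait again stands in for the harness's polling):

    kubectl --context auto-589911 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-589911 wait --for=condition=Ready pod -l app=netcat --timeout=15m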

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
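
The DNS, Localhost, and HairPin checks above all exec into that probe deployment; the trio, as a sketch:

    # DNS: resolve the in-cluster API service name.
    kubectl --context auto-589911 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod reaches its own port directly.
    kubectl --context auto-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaches itself back through its own service.
    kubectl --context auto-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"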

TestNetworkPlugins/group/calico/Start (61.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.331228625s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.33s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-cpssp" [6135bf02-311b-439f-b187-257712b49d9e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003981756s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24zbz" [be5bd586-d502-4d2f-89d5-89b60f52a726] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003266728s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-589911 "pgrep -a kubelet"
I0908 17:29:48.307697   11141 config.go:182] Loaded profile config "kindnet-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9dngh" [0d21eca8-86b8-47cf-85dc-57cc9567a6a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9dngh" [0d21eca8-86b8-47cf-85dc-57cc9567a6a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003845131s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24zbz" [be5bd586-d502-4d2f-89d5-89b60f52a726] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00529765s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-200185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-shtfn" [8e93734e-ba4e-440d-804b-085ffd527ca6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003286832s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-200185 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-200185 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-200185 -n embed-certs-200185
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-200185 -n embed-certs-200185: exit status 2 (290.355851ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-200185 -n embed-certs-200185
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-200185 -n embed-certs-200185: exit status 2 (292.873528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-200185 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-200185 -n embed-certs-200185
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-200185 -n embed-certs-200185
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-shtfn" [8e93734e-ba4e-440d-804b-085ffd527ca6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004442669s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-931891 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (62.99s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0908 17:30:01.991741   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/addons-739733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.993485297s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.99s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-931891 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-931891 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-931891 --alsologtostderr -v=1: (1.187065229s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891: exit status 2 (309.67367ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891: exit status 2 (320.457412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-931891 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-931891 --alsologtostderr -v=1: (1.027142736s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931891 -n default-k8s-diff-port-931891
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.82s)
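
Note: the pause/unpause round trip above can be replayed by hand against the same profile; status exits non-zero while components are paused or stopped (exit status 2 in this run), which is why the harness marks those exits "may be ok". A minimal sketch:

	out/minikube-linux-amd64 pause -p default-k8s-diff-port-931891
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931891   # expect "Paused", exit status 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-931891
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931891     # expect "Running", exit status 0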

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.681623625s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.68s)
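
Note: every Start test in this group shares the same invocation shape and only swaps the CNI selector; a representative sketch (the profile name is a placeholder):

	out/minikube-linux-amd64 start -p <profile> --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio \
	  --cni=flannel   # or --cni=bridge, --cni=testdata/kube-flannel.yaml, or --enable-default-cni=true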

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.22s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.220258655s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.22s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-rqlx2" [00eb844e-2742-485e-b5f8-0bd5dce79100] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-rqlx2" [00eb844e-2742-485e-b5f8-0bd5dce79100] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003935713s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
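
Note: the "waiting ... for pods matching" step polls the labelled pods until they report Running and Ready; kubectl wait expresses roughly the same gate (an equivalent sketch, not the harness's own poller):

	kubectl --context calico-589911 -n kube-system wait pod -l k8s-app=calico-node --for=condition=Ready --timeout=10m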

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-589911 "pgrep -a kubelet"
I0908 17:30:46.839175   11141 config.go:182] Loaded profile config "calico-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)
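
Note: KubeletFlags only checks that the kubelet process is up and captures its full command line; the same inspection by hand prints the PID and every flag the kubelet was started with:

	out/minikube-linux-amd64 ssh -p calico-589911 "pgrep -a kubelet"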

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jxtbq" [a5dbcab6-7def-493b-a673-a8663980d731] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 17:30:48.827689   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:48.834044   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:48.845494   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:48.866846   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:48.908225   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:48.990714   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:49.152592   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:49.474690   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:50.116596   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:51.398594   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jxtbq" [a5dbcab6-7def-493b-a673-a8663980d731] Running
E0908 17:30:53.960297   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003983552s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)
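
Note: NetCatPod force-replaces the shared netcat deployment from testdata and waits for its pod to become healthy; to watch the same rollout interactively (a sketch using standard kubectl, not the harness's poller):

	kubectl --context calico-589911 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context calico-589911 rollout status deployment/netcat --timeout=15m
	kubectl --context calico-589911 get pods -l app=netcat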

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
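
Note: the DNS/Localhost/HairPin trio all exec into the netcat deployment. DNS resolves the kubernetes.default service, Localhost dials the pod's own 127.0.0.1:8080, and HairPin dials the pod back through its own service name, which generally requires hairpin NAT to be configured. In the nc invocations, -z scans without sending data, -w 5 is the connect timeout, and -i 5 the interval between probes:

	kubectl --context calico-589911 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context calico-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context calico-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"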

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-589911 "pgrep -a kubelet"
I0908 17:31:03.304542   11141 config.go:182] Loaded profile config "custom-flannel-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f9kjc" [6611be5d-9ea2-4fbc-873b-43b1b7c9369e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 17:31:04.651923   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:04.658517   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:04.669889   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:04.693197   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:04.734598   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:04.816064   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:04.978265   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:05.300396   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:05.942245   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:07.224478   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-f9kjc" [6611be5d-9ea2-4fbc-873b-43b1b7c9369e] Running
E0908 17:31:09.323592   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:09.786716   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004108976s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (69.94s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-589911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.939063428s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.94s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6vtmv" [6ddc9000-7c55-4b49-9ec3-5a0383437361] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004477921s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-589911 "pgrep -a kubelet"
E0908 17:31:25.150430   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/no-preload-905445/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0908 17:31:25.368550   11141 config.go:182] Loaded profile config "enable-default-cni-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xgtnq" [beafe3dd-133e-4f7f-9f41-b6b98d644caa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xgtnq" [beafe3dd-133e-4f7f-9f41-b6b98d644caa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003928281s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-589911 "pgrep -a kubelet"
I0908 17:31:26.750234   11141 config.go:182] Loaded profile config "flannel-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-97hxz" [91eb8501-b1a1-44d7-9b17-9a1bcb70902c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 17:31:27.154927   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/functional-849003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:31:29.806485   11141 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/old-k8s-version-060127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-97hxz" [91eb8501-b1a1-44d7-9b17-9a1bcb70902c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003475191s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-589911 "pgrep -a kubelet"
I0908 17:32:27.832304   11141 config.go:182] Loaded profile config "bridge-589911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-589911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jnh2b" [9bf472d5-17e1-4542-adac-4290fd1870a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jnh2b" [9bf472d5-17e1-4542-adac-4290fd1870a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00406275s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-589911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-589911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-739733 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-586668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-586668
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.52s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-589911 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-589911

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-589911

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/hosts:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/resolv.conf:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-589911

>>> host: crictl pods:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: crictl containers:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> k8s: describe netcat deployment:
error: context "kubenet-589911" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-589911" does not exist

>>> k8s: netcat logs:
error: context "kubenet-589911" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-589911" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-589911" does not exist

>>> k8s: coredns logs:
error: context "kubenet-589911" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-589911" does not exist

>>> k8s: api server logs:
error: context "kubenet-589911" does not exist

>>> host: /etc/cni:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: ip a s:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: ip r s:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: iptables-save:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: iptables table nat:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-589911" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-589911" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-589911" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: kubelet daemon config:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> k8s: kubelet logs:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:22:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-069462
contexts:
- context:
    cluster: kubernetes-upgrade-069462
    user: kubernetes-upgrade-069462
  name: kubernetes-upgrade-069462
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-069462
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/kubernetes-upgrade-069462/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/kubernetes-upgrade-069462/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-589911

>>> host: docker daemon status:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: docker daemon config:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: docker system info:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: cri-docker daemon status:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: cri-docker daemon config:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: cri-dockerd version:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: containerd daemon status:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: containerd daemon config:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: containerd config dump:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: crio daemon status:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: crio daemon config:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: /etc/crio:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

>>> host: crio config:
* Profile "kubenet-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-589911"

----------------------- debugLogs end: kubenet-589911 [took: 3.33810443s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-589911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-589911
--- SKIP: TestNetworkPlugins/group/kubenet (3.52s)

TestNetworkPlugins/group/cilium (3.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-589911 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-589911

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-589911

>>> host: /etc/nsswitch.conf:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/hosts:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/resolv.conf:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-589911

>>> host: crictl pods:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: crictl containers:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> k8s: describe netcat deployment:
error: context "cilium-589911" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-589911" does not exist

>>> k8s: netcat logs:
error: context "cilium-589911" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-589911" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-589911" does not exist

>>> k8s: coredns logs:
error: context "cilium-589911" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-589911" does not exist

>>> k8s: api server logs:
error: context "cilium-589911" does not exist

>>> host: /etc/cni:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: ip a s:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: ip r s:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: iptables-save:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: iptables table nat:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-589911

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-589911

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-589911" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-589911" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-589911

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-589911

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-589911" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-589911" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-589911" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-589911" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-589911" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: kubelet daemon config:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> k8s: kubelet logs:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-172062
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:22:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-069462
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7450/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-073444
contexts:
- context:
    cluster: NoKubernetes-172062
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-172062
  name: NoKubernetes-172062
- context:
    cluster: kubernetes-upgrade-069462
    user: kubernetes-upgrade-069462
  name: kubernetes-upgrade-069462
- context:
    cluster: pause-073444
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:23:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-073444
  name: pause-073444
current-context: pause-073444
kind: Config
preferences: {}
users:
- name: NoKubernetes-172062
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/NoKubernetes-172062/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/NoKubernetes-172062/client.key
- name: kubernetes-upgrade-069462
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/kubernetes-upgrade-069462/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/kubernetes-upgrade-069462/client.key
- name: pause-073444
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/pause-073444/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7450/.minikube/profiles/pause-073444/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-589911

>>> host: docker daemon status:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: docker daemon config:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: docker system info:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: cri-docker daemon status:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: cri-docker daemon config:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: cri-dockerd version:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: containerd daemon status:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: containerd daemon config:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: containerd config dump:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: crio daemon status:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: crio daemon config:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: /etc/crio:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

>>> host: crio config:
* Profile "cilium-589911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589911"

----------------------- debugLogs end: cilium-589911 [took: 3.308344978s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-589911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-589911
--- SKIP: TestNetworkPlugins/group/cilium (3.49s)