Test Report: Docker_Linux_crio 21508

8932374f20a738e68cf28dc9e127463468f1eb30:2025-09-08:41334

Test failures (6/332)

Order  Failed test                                   Duration (s)
37     TestAddons/parallel/Ingress                   158.14
98     TestFunctional/parallel/ServiceCmdConnect     603.03
138    TestFunctional/parallel/ServiceCmd/DeployApp  600.55
153    TestFunctional/parallel/ServiceCmd/HTTPS      0.52
154    TestFunctional/parallel/ServiceCmd/Format     0.52
155    TestFunctional/parallel/ServiceCmd/URL        0.52
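
Each entry above is a Go subtest, so a failure can be re-run in isolation with go test's -run filter. A minimal sketch, assuming a minikube checkout at the commit above with out/minikube-linux-amd64 already built; the package path follows the minikube repo layout, and the -timeout value is illustrative (the CI harness may pass extra flags):

    # from the minikube repo root; -run takes the slash-separated subtest path
    go test ./test/integration -v -timeout 90m \
      -run 'TestAddons/parallel/Ingress'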
TestAddons/parallel/Ingress (158.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-329194 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-329194 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-329194 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [69cc8631-41ef-4f22-ad8e-d0b7df04394b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [69cc8631-41ef-4f22-ad8e-d0b7df04394b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.003777563s
I0908 13:50:34.246496  498696 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-329194 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.084649268s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-329194 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
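
Note on the failure above: the assertion at addons_test.go:264 probes the ingress from inside the node, and the remote curl timed out (ssh reports exit status 28, curl's operation-timeout code), so the minikube command itself exits 1. A hand-run equivalent for this profile, with an explicit bound added so a hang fails fast (the --max-time value is an assumption, not part of the test):

    # probe the ingress-nginx controller from inside the addons-329194 node
    out/minikube-linux-amd64 -p addons-329194 ssh \
      "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # stderr showing "Process exited with status 28" reproduces this run's timeout
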
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-329194
helpers_test.go:243: (dbg) docker inspect addons-329194:

-- stdout --
	[
	    {
	        "Id": "2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8",
	        "Created": "2025-09-08T13:47:15.919031008Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500593,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:47:15.949570292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8/hosts",
	        "LogPath": "/var/lib/docker/containers/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8-json.log",
	        "Name": "/addons-329194",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-329194:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-329194",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8",
	                "LowerDir": "/var/lib/docker/overlay2/1919e323a301a6b7ebec6fe7be8acab8e6daa43a599ad72a858818fa5ed4a1f9-init/diff:/var/lib/docker/overlay2/b93813c424f19944b84d6650258ee42fc88dbf4e092111f8eb9116f587feb593/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1919e323a301a6b7ebec6fe7be8acab8e6daa43a599ad72a858818fa5ed4a1f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1919e323a301a6b7ebec6fe7be8acab8e6daa43a599ad72a858818fa5ed4a1f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1919e323a301a6b7ebec6fe7be8acab8e6daa43a599ad72a858818fa5ed4a1f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-329194",
	                "Source": "/var/lib/docker/volumes/addons-329194/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-329194",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-329194",
	                "name.minikube.sigs.k8s.io": "addons-329194",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb3d0e4e9edadb314779f260fa792b0595a20e948a254af5d54da8a26719ab90",
	            "SandboxKey": "/var/run/docker/netns/fb3d0e4e9eda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-329194": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:43:30:b7:6b:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5f23afdddb33bf3af8b556ed5d26a6390b3e119c608bce782cb88fe4b37bd29a",
	                    "EndpointID": "40b2360216a2f3a5033d76edb6b5677c1ba33badb73f01b87345ecc4d24ac1eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-329194",
	                        "2d117662d534"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
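
Note: the NetworkSettings.Ports block in the inspect output above is where minikube resolves the host-side SSH port for the node; the Go-template lookup it uses (visible verbatim in the Last Start log below) can be issued by hand:

    # print the 127.0.0.1-bound host port mapped to the node's sshd (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-329194
    # prints 33139 in this run
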
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-329194 -n addons-329194
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 logs -n 25: (1.203801394s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-761448 --alsologtostderr --binary-mirror http://127.0.0.1:38697 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-761448 │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │                     │
	│ delete  │ -p binary-mirror-761448                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-761448 │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │ 08 Sep 25 13:46 UTC │
	│ addons  │ enable dashboard -p addons-329194                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-329194                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │                     │
	│ start   │ -p addons-329194 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │ 08 Sep 25 13:49 UTC │
	│ addons  │ addons-329194 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:49 UTC │ 08 Sep 25 13:49 UTC │
	│ addons  │ addons-329194 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:49 UTC │ 08 Sep 25 13:49 UTC │
	│ addons  │ enable headlamp -p addons-329194 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:49 UTC │ 08 Sep 25 13:49 UTC │
	│ addons  │ addons-329194 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:49 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:49 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ ip      │ addons-329194 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ ssh     │ addons-329194 ssh cat /opt/local-path-provisioner/pvc-c22325b4-df5e-4394-9d6a-1a970c9e4697_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:51 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-329194                                                                                                                                                                                                                                                                                                                                                                                           │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ ssh     │ addons-329194 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │                     │
	│ addons  │ addons-329194 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:50 UTC │ 08 Sep 25 13:50 UTC │
	│ addons  │ addons-329194 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:51 UTC │ 08 Sep 25 13:51 UTC │
	│ ip      │ addons-329194 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-329194        │ jenkins │ v1.36.0 │ 08 Sep 25 13:52 UTC │ 08 Sep 25 13:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:46:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:46:52.160891  499972 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:46:52.161042  499972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:52.161054  499972 out.go:374] Setting ErrFile to fd 2...
	I0908 13:46:52.161058  499972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:52.161272  499972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 13:46:52.161977  499972 out.go:368] Setting JSON to false
	I0908 13:46:52.162984  499972 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12558,"bootTime":1757326654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:46:52.163056  499972 start.go:140] virtualization: kvm guest
	I0908 13:46:52.165599  499972 out.go:179] * [addons-329194] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:46:52.166829  499972 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:46:52.166861  499972 notify.go:220] Checking for updates...
	I0908 13:46:52.169480  499972 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:46:52.170669  499972 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:46:52.171722  499972 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 13:46:52.173005  499972 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:46:52.174250  499972 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:46:52.175589  499972 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:46:52.197195  499972 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:46:52.197322  499972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:52.243333  499972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 13:46:52.234141879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:52.243435  499972 docker.go:318] overlay module found
	I0908 13:46:52.245077  499972 out.go:179] * Using the docker driver based on user configuration
	I0908 13:46:52.246028  499972 start.go:304] selected driver: docker
	I0908 13:46:52.246044  499972 start.go:918] validating driver "docker" against <nil>
	I0908 13:46:52.246056  499972 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:46:52.246834  499972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:52.294303  499972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 13:46:52.285900973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:52.294463  499972 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:46:52.294682  499972 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:46:52.296248  499972 out.go:179] * Using Docker driver with root privileges
	I0908 13:46:52.297523  499972 cni.go:84] Creating CNI manager for ""
	I0908 13:46:52.297592  499972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:46:52.297604  499972 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:46:52.297675  499972 start.go:348] cluster config:
	{Name:addons-329194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-329194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:46:52.299074  499972 out.go:179] * Starting "addons-329194" primary control-plane node in "addons-329194" cluster
	I0908 13:46:52.300226  499972 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:46:52.301526  499972 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:46:52.302723  499972 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:46:52.302767  499972 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:46:52.302778  499972 cache.go:58] Caching tarball of preloaded images
	I0908 13:46:52.302820  499972 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:46:52.302892  499972 preload.go:172] Found /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 13:46:52.302904  499972 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:46:52.303323  499972 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/config.json ...
	I0908 13:46:52.303355  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/config.json: {Name:mk09dfc781dacbc7d3ae41d65b75f9c60c346c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:46:52.319767  499972 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:46:52.319984  499972 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:46:52.320012  499972 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 13:46:52.320018  499972 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 13:46:52.320030  499972 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 13:46:52.320042  499972 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 13:47:03.931951  499972 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 13:47:03.932002  499972 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:47:03.932040  499972 start.go:360] acquireMachinesLock for addons-329194: {Name:mka0c7ed5deffbc169ae988852acb11543a97583 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:47:03.932145  499972 start.go:364] duration metric: took 83.934µs to acquireMachinesLock for "addons-329194"
	I0908 13:47:03.932172  499972 start.go:93] Provisioning new machine with config: &{Name:addons-329194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-329194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:47:03.932268  499972 start.go:125] createHost starting for "" (driver="docker")
	I0908 13:47:03.933875  499972 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 13:47:03.934111  499972 start.go:159] libmachine.API.Create for "addons-329194" (driver="docker")
	I0908 13:47:03.934146  499972 client.go:168] LocalClient.Create starting
	I0908 13:47:03.934248  499972 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca.pem
	I0908 13:47:04.089259  499972 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/cert.pem
	I0908 13:47:04.405073  499972 cli_runner.go:164] Run: docker network inspect addons-329194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 13:47:04.422640  499972 cli_runner.go:211] docker network inspect addons-329194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 13:47:04.422756  499972 network_create.go:284] running [docker network inspect addons-329194] to gather additional debugging logs...
	I0908 13:47:04.422782  499972 cli_runner.go:164] Run: docker network inspect addons-329194
	W0908 13:47:04.438698  499972 cli_runner.go:211] docker network inspect addons-329194 returned with exit code 1
	I0908 13:47:04.438736  499972 network_create.go:287] error running [docker network inspect addons-329194]: docker network inspect addons-329194: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-329194 not found
	I0908 13:47:04.438754  499972 network_create.go:289] output of [docker network inspect addons-329194]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-329194 not found
	
	** /stderr **
	I0908 13:47:04.438869  499972 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:47:04.455785  499972 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7f050}
	I0908 13:47:04.455840  499972 network_create.go:124] attempt to create docker network addons-329194 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 13:47:04.455902  499972 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-329194 addons-329194
	I0908 13:47:04.504375  499972 network_create.go:108] docker network addons-329194 192.168.49.0/24 created
	I0908 13:47:04.504408  499972 kic.go:121] calculated static IP "192.168.49.2" for the "addons-329194" container
	I0908 13:47:04.504524  499972 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 13:47:04.521025  499972 cli_runner.go:164] Run: docker volume create addons-329194 --label name.minikube.sigs.k8s.io=addons-329194 --label created_by.minikube.sigs.k8s.io=true
	I0908 13:47:04.538438  499972 oci.go:103] Successfully created a docker volume addons-329194
	I0908 13:47:04.538529  499972 cli_runner.go:164] Run: docker run --rm --name addons-329194-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-329194 --entrypoint /usr/bin/test -v addons-329194:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 13:47:11.445810  499972 cli_runner.go:217] Completed: docker run --rm --name addons-329194-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-329194 --entrypoint /usr/bin/test -v addons-329194:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (6.907218755s)
	I0908 13:47:11.445852  499972 oci.go:107] Successfully prepared a docker volume addons-329194
	I0908 13:47:11.445885  499972 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:47:11.445912  499972 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 13:47:11.445978  499972 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-329194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 13:47:15.857724  499972 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-329194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.411690507s)
	I0908 13:47:15.857763  499972 kic.go:203] duration metric: took 4.411846438s to extract preloaded images to volume ...
	W0908 13:47:15.857924  499972 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 13:47:15.858049  499972 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 13:47:15.904209  499972 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-329194 --name addons-329194 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-329194 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-329194 --network addons-329194 --ip 192.168.49.2 --volume addons-329194:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 13:47:16.152958  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Running}}
	I0908 13:47:16.171780  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:16.191529  499972 cli_runner.go:164] Run: docker exec addons-329194 stat /var/lib/dpkg/alternatives/iptables
	I0908 13:47:16.237251  499972 oci.go:144] the created container "addons-329194" has a running status.
	I0908 13:47:16.237293  499972 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa...
	I0908 13:47:16.689709  499972 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 13:47:16.711204  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:16.730793  499972 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 13:47:16.730814  499972 kic_runner.go:114] Args: [docker exec --privileged addons-329194 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 13:47:16.777008  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:16.795507  499972 machine.go:93] provisionDockerMachine start ...
	I0908 13:47:16.795627  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:16.814417  499972 main.go:141] libmachine: Using SSH client type: native
	I0908 13:47:16.814663  499972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 13:47:16.814678  499972 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:47:16.932409  499972 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-329194
	
	I0908 13:47:16.932447  499972 ubuntu.go:182] provisioning hostname "addons-329194"
	I0908 13:47:16.932572  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:16.951538  499972 main.go:141] libmachine: Using SSH client type: native
	I0908 13:47:16.951764  499972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 13:47:16.951780  499972 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-329194 && echo "addons-329194" | sudo tee /etc/hostname
	I0908 13:47:17.079743  499972 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-329194
	
	I0908 13:47:17.079813  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:17.097468  499972 main.go:141] libmachine: Using SSH client type: native
	I0908 13:47:17.097731  499972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 13:47:17.097751  499972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-329194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-329194/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-329194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:47:17.212766  499972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:47:17.212799  499972 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-494960/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-494960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-494960/.minikube}
	I0908 13:47:17.212823  499972 ubuntu.go:190] setting up certificates
	I0908 13:47:17.212839  499972 provision.go:84] configureAuth start
	I0908 13:47:17.212901  499972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-329194
	I0908 13:47:17.229869  499972 provision.go:143] copyHostCerts
	I0908 13:47:17.229950  499972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-494960/.minikube/key.pem (1675 bytes)
	I0908 13:47:17.230075  499972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-494960/.minikube/ca.pem (1078 bytes)
	I0908 13:47:17.230157  499972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-494960/.minikube/cert.pem (1123 bytes)
	I0908 13:47:17.230226  499972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-494960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca-key.pem org=jenkins.addons-329194 san=[127.0.0.1 192.168.49.2 addons-329194 localhost minikube]
	I0908 13:47:17.624318  499972 provision.go:177] copyRemoteCerts
	I0908 13:47:17.624395  499972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:47:17.624447  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:17.642127  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:17.729236  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 13:47:17.752125  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 13:47:17.774795  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 13:47:17.797830  499972 provision.go:87] duration metric: took 584.974255ms to configureAuth
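If the SAN list requested above ([127.0.0.1 192.168.49.2 addons-329194 localhost minikube]) needs verifying on the node, an openssl query of this shape would do it (illustrative, not part of the test run):

	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'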
	I0908 13:47:17.797869  499972 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:47:17.798059  499972 config.go:182] Loaded profile config "addons-329194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:47:17.798188  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:17.815345  499972 main.go:141] libmachine: Using SSH client type: native
	I0908 13:47:17.815615  499972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0908 13:47:17.815638  499972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 13:47:18.020413  499972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 13:47:18.020446  499972 machine.go:96] duration metric: took 1.224914144s to provisionDockerMachine
	I0908 13:47:18.020485  499972 client.go:171] duration metric: took 14.086306511s to LocalClient.Create
	I0908 13:47:18.020513  499972 start.go:167] duration metric: took 14.086403482s to libmachine.API.Create "addons-329194"
	I0908 13:47:18.020524  499972 start.go:293] postStartSetup for "addons-329194" (driver="docker")
	I0908 13:47:18.020534  499972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:47:18.020604  499972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:47:18.020641  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:18.039185  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:18.125770  499972 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:47:18.129202  499972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:47:18.129238  499972 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:47:18.129249  499972 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:47:18.129259  499972 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:47:18.129274  499972 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-494960/.minikube/addons for local assets ...
	I0908 13:47:18.129339  499972 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-494960/.minikube/files for local assets ...
	I0908 13:47:18.129371  499972 start.go:296] duration metric: took 108.83952ms for postStartSetup
	I0908 13:47:18.129694  499972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-329194
	I0908 13:47:18.147059  499972 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/config.json ...
	I0908 13:47:18.147358  499972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:47:18.147405  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:18.164582  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:18.249554  499972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:47:18.253815  499972 start.go:128] duration metric: took 14.321530386s to createHost
	I0908 13:47:18.253845  499972 start.go:83] releasing machines lock for "addons-329194", held for 14.321687508s
	I0908 13:47:18.253931  499972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-329194
	I0908 13:47:18.271824  499972 ssh_runner.go:195] Run: cat /version.json
	I0908 13:47:18.271896  499972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:47:18.271963  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:18.271906  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:18.290914  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:18.291329  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:18.372307  499972 ssh_runner.go:195] Run: systemctl --version
	I0908 13:47:18.470899  499972 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 13:47:18.609461  499972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:47:18.613854  499972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:47:18.632209  499972 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:47:18.632305  499972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:47:18.658802  499972 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 13:47:18.658827  499972 start.go:495] detecting cgroup driver to use...
	I0908 13:47:18.658861  499972 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:47:18.658901  499972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 13:47:18.673104  499972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 13:47:18.683572  499972 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:47:18.683636  499972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:47:18.696061  499972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:47:18.709471  499972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:47:18.787273  499972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:47:18.874633  499972 docker.go:234] disabling docker service ...
	I0908 13:47:18.874698  499972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:47:18.893484  499972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:47:18.904676  499972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:47:18.987033  499972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:47:19.071397  499972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:47:19.082551  499972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:47:19.098277  499972 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 13:47:19.098349  499972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:47:19.107769  499972 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 13:47:19.107829  499972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:47:19.117401  499972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:47:19.126582  499972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:47:19.136447  499972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:47:19.145438  499972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:47:19.154776  499972 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:47:19.170122  499972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
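Taken together, the sed edits above leave the CRI-O drop-in with contents roughly like the following (reconstructed from the commands; section placement and surrounding keys on the node may differ):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"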
	I0908 13:47:19.179495  499972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:47:19.187513  499972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:47:19.196090  499972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:47:19.275487  499972 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 13:47:19.384758  499972 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 13:47:19.384842  499972 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 13:47:19.388419  499972 start.go:563] Will wait 60s for crictl version
	I0908 13:47:19.388517  499972 ssh_runner.go:195] Run: which crictl
	I0908 13:47:19.391662  499972 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:47:19.425984  499972 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 13:47:19.426070  499972 ssh_runner.go:195] Run: crio --version
	I0908 13:47:19.462267  499972 ssh_runner.go:195] Run: crio --version
	I0908 13:47:19.499040  499972 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 13:47:19.500570  499972 cli_runner.go:164] Run: docker network inspect addons-329194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:47:19.517321  499972 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 13:47:19.521027  499972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
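Unpacked, the /etc/hosts rewrite above is equivalent to (same logic, expanded for readability):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	  echo "192.168.49.1	host.minikube.internal"       # re-add the gateway mapping
	} > /tmp/h.$$                                        # stage in a PID-suffixed temp file
	sudo cp /tmp/h.$$ /etc/hosts                         # then overwrite the real file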
	I0908 13:47:19.531661  499972 kubeadm.go:875] updating cluster {Name:addons-329194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-329194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:47:19.531783  499972 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:47:19.531823  499972 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:47:19.600627  499972 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:47:19.600653  499972 crio.go:433] Images already preloaded, skipping extraction
	I0908 13:47:19.600697  499972 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:47:19.633459  499972 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:47:19.633488  499972 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:47:19.633499  499972 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 13:47:19.633610  499972 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-329194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-329194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
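The kubelet flags above (--cgroups-per-qos=false and an empty --enforce-node-allocatable=) turn off QoS-tier cgroup creation and node-allocatable enforcement, which do not work reliably inside a container. A quick way to confirm what the kubelet actually started with (illustrative):

	systemctl cat kubelet | grep ExecStart   # unit plus the drop-in written later
	ps -o args= -C kubelet                   # the live command line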
	I0908 13:47:19.633678  499972 ssh_runner.go:195] Run: crio config
	I0908 13:47:19.675662  499972 cni.go:84] Creating CNI manager for ""
	I0908 13:47:19.675688  499972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:47:19.675702  499972 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:47:19.675724  499972 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-329194 NodeName:addons-329194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:47:19.675853  499972 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-329194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
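A config like the one above can also be sanity-checked offline before init (illustrative commands, not part of the minikube flow):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run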
	
	I0908 13:47:19.675916  499972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:47:19.684477  499972 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:47:19.684557  499972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:47:19.692857  499972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 13:47:19.709391  499972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:47:19.726421  499972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 13:47:19.743338  499972 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:47:19.746822  499972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:47:19.757097  499972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:47:19.839184  499972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:47:19.852321  499972 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194 for IP: 192.168.49.2
	I0908 13:47:19.852350  499972 certs.go:194] generating shared ca certs ...
	I0908 13:47:19.852374  499972 certs.go:226] acquiring lock for ca certs: {Name:mk0001ceee7360ccd7de3e9f7a39d694f4494b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:19.852552  499972 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-494960/.minikube/ca.key
	I0908 13:47:20.550346  499972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-494960/.minikube/ca.crt ...
	I0908 13:47:20.550381  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/ca.crt: {Name:mk45e5b343f9e79ea5c9d09488e9267987da9eb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.550554  499972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-494960/.minikube/ca.key ...
	I0908 13:47:20.550569  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/ca.key: {Name:mk0bef261a6b90ccf492085d527282812b18800d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.550647  499972 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.key
	I0908 13:47:20.630941  499972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.crt ...
	I0908 13:47:20.630974  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.crt: {Name:mk6f6378b37602badbec50322b565798ce579693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.631174  499972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.key ...
	I0908 13:47:20.631199  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.key: {Name:mk278b1870349e9059b61633971432114540ac5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.631301  499972 certs.go:256] generating profile certs ...
	I0908 13:47:20.631380  499972 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.key
	I0908 13:47:20.631397  499972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt with IP's: []
	I0908 13:47:20.759585  499972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt ...
	I0908 13:47:20.759622  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: {Name:mk78626b55e4fc1af74da639f42e1b86ba04a426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.759830  499972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.key ...
	I0908 13:47:20.759849  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.key: {Name:mk8d0921647456330a2512c2a87110a02224f8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.759960  499972 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.key.a1130027
	I0908 13:47:20.759989  499972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.crt.a1130027 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 13:47:20.947776  499972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.crt.a1130027 ...
	I0908 13:47:20.947813  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.crt.a1130027: {Name:mkb6b2fc96db9e96404dd8ae49c8d36446715a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.948035  499972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.key.a1130027 ...
	I0908 13:47:20.948054  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.key.a1130027: {Name:mkc65cb8e9c96a7bd5d81f28cadee8f0089308e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.948161  499972 certs.go:381] copying /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.crt.a1130027 -> /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.crt
	I0908 13:47:20.948311  499972 certs.go:385] copying /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.key.a1130027 -> /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.key
	I0908 13:47:20.948395  499972 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.key
	I0908 13:47:20.948421  499972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.crt with IP's: []
	I0908 13:47:20.995251  499972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.crt ...
	I0908 13:47:20.995285  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.crt: {Name:mk0112f7e726e24d061ae0a3571c79f400d07e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.995480  499972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.key ...
	I0908 13:47:20.995503  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.key: {Name:mk1a8f085f7fe481416bfb163cab26360d51f630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:20.995735  499972 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca-key.pem (1671 bytes)
	I0908 13:47:20.995779  499972 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/ca.pem (1078 bytes)
	I0908 13:47:20.995825  499972 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:47:20.995866  499972 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-494960/.minikube/certs/key.pem (1675 bytes)
	I0908 13:47:20.996607  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:47:21.020197  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 13:47:21.042594  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:47:21.064898  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 13:47:21.087523  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 13:47:21.110004  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:47:21.132732  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:47:21.154683  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 13:47:21.177145  499972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-494960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:47:21.200358  499972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:47:21.217207  499972 ssh_runner.go:195] Run: openssl version
	I0908 13:47:21.222617  499972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:47:21.231683  499972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:47:21.235002  499972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:47 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:47:21.235062  499972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:47:21.241641  499972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
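The b5213941.0 name above is the OpenSSL subject hash of the minikube CA: OpenSSL resolves trust anchors in /etc/ssl/certs by hash, which is what the openssl x509 -hash call computes:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash (here b5213941), hence the /etc/ssl/certs/b5213941.0 symlink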
	I0908 13:47:21.250495  499972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:47:21.253696  499972 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 13:47:21.253746  499972 kubeadm.go:392] StartCluster: {Name:addons-329194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-329194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:47:21.253848  499972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 13:47:21.253926  499972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:47:21.288513  499972 cri.go:89] found id: ""
	I0908 13:47:21.288578  499972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:47:21.297053  499972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 13:47:21.305719  499972 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 13:47:21.305788  499972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 13:47:21.313963  499972 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 13:47:21.314004  499972 kubeadm.go:157] found existing configuration files:
	
	I0908 13:47:21.314053  499972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 13:47:21.322369  499972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 13:47:21.322423  499972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 13:47:21.330594  499972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 13:47:21.338740  499972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 13:47:21.338808  499972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 13:47:21.346736  499972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 13:47:21.355023  499972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 13:47:21.355080  499972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 13:47:21.362915  499972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 13:47:21.371034  499972 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 13:47:21.371119  499972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 13:47:21.378937  499972 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 13:47:21.432287  499972 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 13:47:21.432606  499972 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 13:47:21.492848  499972 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 13:47:32.553339  499972 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 13:47:32.553434  499972 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 13:47:32.553586  499972 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 13:47:32.553670  499972 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 13:47:32.553708  499972 kubeadm.go:310] OS: Linux
	I0908 13:47:32.553766  499972 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 13:47:32.553830  499972 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 13:47:32.553900  499972 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 13:47:32.553949  499972 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 13:47:32.553992  499972 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 13:47:32.554058  499972 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 13:47:32.554111  499972 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 13:47:32.554154  499972 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 13:47:32.554195  499972 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 13:47:32.554324  499972 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 13:47:32.554465  499972 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 13:47:32.554595  499972 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 13:47:32.554703  499972 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 13:47:32.556327  499972 out.go:252]   - Generating certificates and keys ...
	I0908 13:47:32.556429  499972 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 13:47:32.556545  499972 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 13:47:32.556656  499972 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 13:47:32.556715  499972 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 13:47:32.556771  499972 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 13:47:32.556847  499972 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 13:47:32.556911  499972 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 13:47:32.557073  499972 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-329194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 13:47:32.557132  499972 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 13:47:32.557296  499972 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-329194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 13:47:32.557387  499972 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 13:47:32.557471  499972 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 13:47:32.557542  499972 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 13:47:32.557621  499972 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 13:47:32.557705  499972 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 13:47:32.557772  499972 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 13:47:32.557821  499972 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 13:47:32.557876  499972 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 13:47:32.557928  499972 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 13:47:32.557994  499972 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 13:47:32.558054  499972 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 13:47:32.559384  499972 out.go:252]   - Booting up control plane ...
	I0908 13:47:32.559460  499972 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 13:47:32.559524  499972 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 13:47:32.559578  499972 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 13:47:32.559659  499972 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 13:47:32.559745  499972 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 13:47:32.559893  499972 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 13:47:32.560008  499972 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 13:47:32.560050  499972 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 13:47:32.560151  499972 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 13:47:32.560252  499972 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 13:47:32.560327  499972 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000901737s
	I0908 13:47:32.560405  499972 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 13:47:32.560508  499972 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 13:47:32.560593  499972 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 13:47:32.560662  499972 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 13:47:32.560731  499972 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.800552825s
	I0908 13:47:32.560787  499972 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.081883891s
	I0908 13:47:32.560865  499972 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501275772s
	I0908 13:47:32.560996  499972 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 13:47:32.561109  499972 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 13:47:32.561204  499972 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 13:47:32.561467  499972 kubeadm.go:310] [mark-control-plane] Marking the node addons-329194 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 13:47:32.561514  499972 kubeadm.go:310] [bootstrap-token] Using token: 0s0m73.wnjinj4ljyiaggub
	I0908 13:47:32.562875  499972 out.go:252]   - Configuring RBAC rules ...
	I0908 13:47:32.562957  499972 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 13:47:32.563032  499972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 13:47:32.563150  499972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 13:47:32.563275  499972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 13:47:32.563431  499972 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 13:47:32.563563  499972 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 13:47:32.563694  499972 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 13:47:32.563755  499972 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 13:47:32.563822  499972 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 13:47:32.563833  499972 kubeadm.go:310] 
	I0908 13:47:32.563880  499972 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 13:47:32.563885  499972 kubeadm.go:310] 
	I0908 13:47:32.563949  499972 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 13:47:32.563957  499972 kubeadm.go:310] 
	I0908 13:47:32.563985  499972 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 13:47:32.564050  499972 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 13:47:32.564093  499972 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 13:47:32.564099  499972 kubeadm.go:310] 
	I0908 13:47:32.564140  499972 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 13:47:32.564146  499972 kubeadm.go:310] 
	I0908 13:47:32.564182  499972 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 13:47:32.564188  499972 kubeadm.go:310] 
	I0908 13:47:32.564238  499972 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 13:47:32.564315  499972 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 13:47:32.564394  499972 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 13:47:32.564401  499972 kubeadm.go:310] 
	I0908 13:47:32.564562  499972 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 13:47:32.564679  499972 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 13:47:32.564695  499972 kubeadm.go:310] 
	I0908 13:47:32.564817  499972 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0s0m73.wnjinj4ljyiaggub \
	I0908 13:47:32.564983  499972 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b1c18b1f8a7a6a9e013d511358fc4780e2b92ded00768885a89b1bf5aef26a3a \
	I0908 13:47:32.565015  499972 kubeadm.go:310] 	--control-plane 
	I0908 13:47:32.565021  499972 kubeadm.go:310] 
	I0908 13:47:32.565139  499972 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 13:47:32.565149  499972 kubeadm.go:310] 
	I0908 13:47:32.565307  499972 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0s0m73.wnjinj4ljyiaggub \
	I0908 13:47:32.565458  499972 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b1c18b1f8a7a6a9e013d511358fc4780e2b92ded00768885a89b1bf5aef26a3a 
	I0908 13:47:32.565472  499972 cni.go:84] Creating CNI manager for ""
	I0908 13:47:32.565479  499972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:47:32.566990  499972 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 13:47:32.568205  499972 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 13:47:32.572444  499972 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 13:47:32.572486  499972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 13:47:32.590314  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
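Once the CNI manifest is applied, the rollout can be watched with kubectl (illustrative; assumes the DaemonSet is named kindnet, as in the upstream kindnet manifest):

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s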
	I0908 13:47:32.800542  499972 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 13:47:32.800648  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-329194 minikube.k8s.io/updated_at=2025_09_08T13_47_32_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=addons-329194 minikube.k8s.io/primary=true
	I0908 13:47:32.800650  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:32.807816  499972 ops.go:34] apiserver oom_adj: -16
	I0908 13:47:33.018254  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:33.518637  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:34.018406  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:34.518398  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:35.019135  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:35.519016  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:36.018733  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:36.518442  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:37.018871  499972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:47:37.101059  499972 kubeadm.go:1105] duration metric: took 4.3004806s to wait for elevateKubeSystemPrivileges
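Editor's note: the burst of `kubectl get sa default` probes above is minikube polling, at a roughly 500 ms cadence, for the default service account to exist before elevating kube-system privileges; here it takes about 4.3 s. A standalone sketch of that wait pattern (hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount retries `kubectl get sa default` until it
	// succeeds or the deadline passes, like the probe loop in the log above.
	func waitForDefaultServiceAccount(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.34.0/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				return nil // the default ServiceAccount exists; bootstrap can proceed
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence above
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultServiceAccount(30 * time.Second); err != nil {
			panic(err)
		}
	}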
	I0908 13:47:37.101106  499972 kubeadm.go:394] duration metric: took 15.847365166s to StartCluster
	I0908 13:47:37.101137  499972 settings.go:142] acquiring lock: {Name:mk035ddfe43df9c8bba1830e523bde6a8346cd20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:37.101263  499972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:47:37.101709  499972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/kubeconfig: {Name:mk64645b0fed21ef19227faa54b0fdeaec30c94c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:47:37.101965  499972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 13:47:37.101981  499972 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:47:37.102062  499972 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 13:47:37.102176  499972 addons.go:69] Setting yakd=true in profile "addons-329194"
	I0908 13:47:37.102184  499972 addons.go:69] Setting inspektor-gadget=true in profile "addons-329194"
	I0908 13:47:37.102200  499972 addons.go:238] Setting addon inspektor-gadget=true in "addons-329194"
	I0908 13:47:37.102209  499972 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-329194"
	I0908 13:47:37.102217  499972 addons.go:69] Setting default-storageclass=true in profile "addons-329194"
	I0908 13:47:37.102232  499972 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-329194"
	I0908 13:47:37.102249  499972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-329194"
	I0908 13:47:37.102250  499972 config.go:182] Loaded profile config "addons-329194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:47:37.102259  499972 addons.go:69] Setting registry=true in profile "addons-329194"
	I0908 13:47:37.102259  499972 addons.go:69] Setting ingress-dns=true in profile "addons-329194"
	I0908 13:47:37.102253  499972 addons.go:69] Setting ingress=true in profile "addons-329194"
	I0908 13:47:37.102294  499972 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-329194"
	I0908 13:47:37.102295  499972 addons.go:69] Setting gcp-auth=true in profile "addons-329194"
	I0908 13:47:37.102302  499972 addons.go:69] Setting volcano=true in profile "addons-329194"
	I0908 13:47:37.102309  499972 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-329194"
	I0908 13:47:37.102314  499972 addons.go:238] Setting addon volcano=true in "addons-329194"
	I0908 13:47:37.102312  499972 addons.go:238] Setting addon ingress=true in "addons-329194"
	I0908 13:47:37.102319  499972 mustload.go:65] Loading cluster: addons-329194
	I0908 13:47:37.102344  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102362  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102382  499972 addons.go:69] Setting cloud-spanner=true in profile "addons-329194"
	I0908 13:47:37.102416  499972 addons.go:238] Setting addon cloud-spanner=true in "addons-329194"
	I0908 13:47:37.102438  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102549  499972 config.go:182] Loaded profile config "addons-329194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:47:37.102706  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102735  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102785  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102861  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102882  499972 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-329194"
	I0908 13:47:37.102890  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102898  499972 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-329194"
	I0908 13:47:37.102923  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.103022  499972 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-329194"
	I0908 13:47:37.103039  499972 addons.go:69] Setting volumesnapshots=true in profile "addons-329194"
	I0908 13:47:37.103054  499972 addons.go:238] Setting addon volumesnapshots=true in "addons-329194"
	I0908 13:47:37.103073  499972 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-329194"
	I0908 13:47:37.103094  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.103093  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.103339  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.103609  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.104227  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102865  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.102272  499972 addons.go:238] Setting addon registry=true in "addons-329194"
	I0908 13:47:37.109231  499972 out.go:179] * Verifying Kubernetes components...
	I0908 13:47:37.109231  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102253  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102200  499972 addons.go:238] Setting addon yakd=true in "addons-329194"
	I0908 13:47:37.109437  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102205  499972 addons.go:69] Setting metrics-server=true in profile "addons-329194"
	I0908 13:47:37.102277  499972 addons.go:69] Setting registry-creds=true in profile "addons-329194"
	I0908 13:47:37.102284  499972 addons.go:69] Setting storage-provisioner=true in profile "addons-329194"
	I0908 13:47:37.102254  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.102275  499972 addons.go:238] Setting addon ingress-dns=true in "addons-329194"
	I0908 13:47:37.109579  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.110003  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.110034  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.110040  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.110261  499972 addons.go:238] Setting addon metrics-server=true in "addons-329194"
	I0908 13:47:37.110370  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.110561  499972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:47:37.110744  499972 addons.go:238] Setting addon registry-creds=true in "addons-329194"
	I0908 13:47:37.110778  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.111075  499972 addons.go:238] Setting addon storage-provisioner=true in "addons-329194"
	I0908 13:47:37.111170  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.133174  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.133743  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.133753  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.134388  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.135992  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.136658  499972 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 13:47:37.137823  499972 addons.go:238] Setting addon default-storageclass=true in "addons-329194"
	I0908 13:47:37.137874  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.137954  499972 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 13:47:37.137970  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 13:47:37.138018  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.138312  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.146561  499972 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 13:47:37.147845  499972 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:47:37.148954  499972 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:47:37.150415  499972 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:47:37.150437  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 13:47:37.150519  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.151781  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.157198  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 13:47:37.158624  499972 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 13:47:37.158649  499972 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 13:47:37.158714  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.161492  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 13:47:37.164862  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 13:47:37.166182  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 13:47:37.167385  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 13:47:37.168575  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 13:47:37.169885  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 13:47:37.171094  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 13:47:37.172493  499972 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 13:47:37.173692  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 13:47:37.173717  499972 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 13:47:37.173793  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.177720  499972 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 13:47:37.179078  499972 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:47:37.179098  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 13:47:37.179157  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.188963  499972 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 13:47:37.190235  499972 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:47:37.190261  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 13:47:37.190348  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.202615  499972 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 13:47:37.202984  499972 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 13:47:37.203788  499972 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 13:47:37.203813  499972 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 13:47:37.203886  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.204403  499972 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 13:47:37.206026  499972 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 13:47:37.206137  499972 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:47:37.206149  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 13:47:37.206199  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.207120  499972 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 13:47:37.207136  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 13:47:37.207200  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	W0908 13:47:37.207940  499972 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 13:47:37.208041  499972 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 13:47:37.209118  499972 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 13:47:37.209141  499972 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 13:47:37.209194  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.210522  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.216594  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.229398  499972 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 13:47:37.229578  499972 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 13:47:37.229617  499972 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-329194"
	I0908 13:47:37.229683  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:37.230286  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:37.230517  499972 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:47:37.230541  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 13:47:37.230603  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.230958  499972 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:47:37.230974  499972 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:47:37.231020  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.232193  499972 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:47:37.233270  499972 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:47:37.233289  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:47:37.233356  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.237540  499972 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:47:37.237567  499972 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:47:37.237628  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.270640  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.270640  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.275998  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.276090  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.276090  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.278924  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.279797  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.281834  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.288345  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.288616  499972 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 13:47:37.292585  499972 out.go:179]   - Using image docker.io/busybox:stable
	I0908 13:47:37.293826  499972 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:47:37.293845  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 13:47:37.293910  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:37.297055  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.300849  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.302698  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	W0908 13:47:37.315983  499972 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 13:47:37.316029  499972 retry.go:31] will retry after 336.333929ms: ssh: handshake failed: EOF
	I0908 13:47:37.332554  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:37.498027  499972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 13:47:37.498170  499972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:47:37.509434  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 13:47:37.509517  499972 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 13:47:37.598026  499972 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 13:47:37.598058  499972 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 13:47:37.610234  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 13:47:37.708528  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:47:37.711039  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:47:37.790755  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 13:47:37.790796  499972 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 13:47:37.798544  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:47:37.800060  499972 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 13:47:37.800134  499972 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 13:47:37.800711  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:47:37.807228  499972 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 13:47:37.807258  499972 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 13:47:37.890582  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:47:37.899551  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:47:37.907308  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:47:37.910119  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 13:47:37.910223  499972 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 13:47:38.090196  499972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:47:38.090315  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 13:47:38.102052  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:47:38.105660  499972 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 13:47:38.105707  499972 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 13:47:38.112905  499972 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 13:47:38.112945  499972 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 13:47:38.310384  499972 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 13:47:38.310417  499972 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 13:47:38.490773  499972 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:38.490801  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 13:47:38.509332  499972 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:47:38.509428  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 13:47:38.607378  499972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:47:38.607487  499972 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:47:38.690272  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 13:47:38.690374  499972 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 13:47:38.795233  499972 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 13:47:38.795282  499972 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 13:47:38.803252  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 13:47:38.803362  499972 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 13:47:38.906429  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:47:39.098885  499972 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:47:39.098976  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 13:47:39.104700  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:39.106448  499972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:47:39.106501  499972 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:47:39.301202  499972 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 13:47:39.301254  499972 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 13:47:39.305054  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:47:39.307931  499972 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:47:39.308014  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 13:47:39.495269  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:47:39.596775  499972 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.098689845s)
	I0908 13:47:39.596951  499972 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0908 13:47:39.596848  499972 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.098608883s)
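Editor's note: the long bash pipeline started at 13:47:37.498027 is what produced the "host record injected" line above: it rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1). A rough Go equivalent of the hosts-stanza edit that sed expression performs, for readers who don't want to untangle the escaping (illustrative only; the Corefile string and gateway IP are inputs):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts block resolving host.minikube.internal
	// immediately before CoreDNS's forward plugin, as the sed `i` command
	// above does. (The real pipeline also inserts a `log` directive before
	// `errors`.)
	func injectHostRecord(corefile, gatewayIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
		return strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			hosts+"        forward . /etc/resolv.conf",
			1)
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}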
	I0908 13:47:39.599217  499972 node_ready.go:35] waiting up to 6m0s for node "addons-329194" to be "Ready" ...
	I0908 13:47:39.612030  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:47:39.701848  499972 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 13:47:39.701938  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 13:47:40.093603  499972 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 13:47:40.093706  499972 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 13:47:40.396638  499972 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-329194" context rescaled to 1 replicas
	I0908 13:47:40.411267  499972 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 13:47:40.411307  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 13:47:40.789113  499972 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 13:47:40.789212  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 13:47:40.997229  499972 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:47:40.997330  499972 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 13:47:41.210125  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:47:41.293840  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.683556476s)
	I0908 13:47:41.293975  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.585353828s)
	I0908 13:47:41.294039  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.582863453s)
	I0908 13:47:41.294075  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.495501698s)
	I0908 13:47:41.294118  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.493346409s)
	W0908 13:47:41.694358  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:41.990153  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.099445935s)
	I0908 13:47:42.009452  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.109794581s)
	I0908 13:47:43.010214  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.102851412s)
	I0908 13:47:43.010264  499972 addons.go:479] Verifying addon ingress=true in "addons-329194"
	I0908 13:47:43.010290  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.908146477s)
	I0908 13:47:43.010351  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.103828366s)
	I0908 13:47:43.010370  499972 addons.go:479] Verifying addon registry=true in "addons-329194"
	I0908 13:47:43.010618  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.905868631s)
	W0908 13:47:43.010650  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:43.010671  499972 retry.go:31] will retry after 374.344941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
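Editor's note: this validation error ("apiVersion not set, kind not set") is consistent with the ig-crd.yaml copy at 13:47:37.209194, which transferred only 14 bytes; a file that small cannot hold a complete CustomResourceDefinition, so kubectl's client-side validation rejects it on every attempt and no amount of retrying can succeed. The retry loop itself follows the usual randomized-delay pattern; a minimal sketch (hypothetical helper, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply re-runs apply up to attempts times, sleeping a randomized
	// delay between tries, in the spirit of the retry.go lines above. It
	// cannot, of course, repair a permanently invalid input file.
	func retryApply(apply func() error, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			// Randomized delay, like the 374ms / 355ms / 230ms values logged.
			time.Sleep(time.Duration(200+rand.Intn(600)) * time.Millisecond)
		}
		return err
	}

	func main() {
		err := retryApply(func() error {
			return errors.New("apiVersion not set, kind not set") // stand-in for the kubectl failure
		}, 3)
		fmt.Println("final:", err)
	}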
	I0908 13:47:43.012530  499972 out.go:179] * Verifying registry addon...
	I0908 13:47:43.012537  499972 out.go:179] * Verifying ingress addon...
	I0908 13:47:43.014951  499972 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 13:47:43.014998  499972 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 13:47:43.018073  499972 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:47:43.018576  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:43.018535  499972 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 13:47:43.018687  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:43.385229  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:43.519199  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:43.519387  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:44.018523  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:44.019361  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:44.102425  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:44.204904  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.89978572s)
	W0908 13:47:44.204975  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 13:47:44.204999  499972 retry.go:31] will retry after 355.244889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
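Editor's note: unlike the ig-crd.yaml case, this failure is genuinely transient: the VolumeSnapshotClass instance is applied in the same kubectl invocation that creates its CRDs, and the new types are not yet served by the API server, hence "ensure CRDs are installed first". The forced re-apply at 13:47:44.561126 appears to complete cleanly (13:47:47.034313, with no further retry logged for it). One way to avoid the race altogether is to wait for the CRDs to report Established before applying instances; a sketch, assuming kubectl is on PATH and using the resource names from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		// 1. Create the CRDs on their own first.
		for _, f := range []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		} {
			if err := run("apply", "-f", f); err != nil {
				panic(err)
			}
		}
		// 2. Block until the API server actually serves the new types.
		if err := run("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
			"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
			"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		// 3. Only now apply objects of those kinds, e.g. csi-hostpath-snapclass.
		if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}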
	I0908 13:47:44.205019  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.709639446s)
	I0908 13:47:44.205076  499972 addons.go:479] Verifying addon metrics-server=true in "addons-329194"
	I0908 13:47:44.205110  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.593034507s)
	I0908 13:47:44.205367  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.995191531s)
	I0908 13:47:44.205397  499972 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-329194"
	I0908 13:47:44.206477  499972 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 13:47:44.206496  499972 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-329194 service yakd-dashboard -n yakd-dashboard
	
	I0908 13:47:44.208490  499972 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 13:47:44.213413  499972 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:47:44.213440  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:44.519273  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:44.519439  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:44.561126  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:47:44.614839  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.229560305s)
	W0908 13:47:44.614890  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:44.614918  499972 retry.go:31] will retry after 230.014507ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:47:44.712371  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:44.759970  499972 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 13:47:44.760091  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:44.780616  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:44.845331  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:44.905807  499972 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 13:47:44.924961  499972 addons.go:238] Setting addon gcp-auth=true in "addons-329194"
	I0908 13:47:44.925026  499972 host.go:66] Checking if "addons-329194" exists ...
	I0908 13:47:44.925460  499972 cli_runner.go:164] Run: docker container inspect addons-329194 --format={{.State.Status}}
	I0908 13:47:44.944638  499972 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 13:47:44.944703  499972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-329194
	I0908 13:47:44.963683  499972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/addons-329194/id_rsa Username:docker}
	I0908 13:47:45.018586  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:45.018644  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:45.211640  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:45.518233  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:45.518481  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:45.711906  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:46.018074  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:46.018086  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:47:46.102774  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:46.211821  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:46.518070  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:46.518197  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:46.712290  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:47.018658  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:47.018852  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:47.034313  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.473130575s)
	I0908 13:47:47.034414  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.189041365s)
	W0908 13:47:47.034453  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:47.034479  499972 retry.go:31] will retry after 738.701312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
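	
	Note: the failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest carries no top-level apiVersion or kind, so that file never reaches the apiserver (the other files in the batch still apply, hence the "unchanged" lines). Even with --validate=false, kubectl would likely still fail, since it needs apiVersion and kind to map the object to a resource; the fix has to be in the manifest itself. A minimal sketch of that top-level check, assuming gopkg.in/yaml.v3 and checking only the first document in the file for brevity — not minikube's or kubectl's actual code:
	
	```go
	// Sketch of the check behind "apiVersion not set, kind not set".
	// Assumes gopkg.in/yaml.v3; checks only the first YAML document
	// (kubectl validates every document in a multi-doc manifest).
	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// typeMeta mirrors the two required top-level fields of any Kubernetes object.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}
	
	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var tm typeMeta
		if err := yaml.Unmarshal(data, &tm); err != nil {
			fmt.Fprintln(os.Stderr, "error parsing manifest:", err)
			os.Exit(1)
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			// Matches the shape of the kubectl error seen in the log.
			fmt.Printf("error validating data: %v\n", missing)
			os.Exit(1)
		}
	}
	```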
	I0908 13:47:47.034476  499972 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.089817525s)
	I0908 13:47:47.036266  499972 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:47:47.037491  499972 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 13:47:47.038781  499972 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 13:47:47.038797  499972 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 13:47:47.056890  499972 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 13:47:47.056925  499972 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 13:47:47.073804  499972 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:47:47.073828  499972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 13:47:47.090908  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:47:47.211672  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:47.409440  499972 addons.go:479] Verifying addon gcp-auth=true in "addons-329194"
	I0908 13:47:47.410619  499972 out.go:179] * Verifying gcp-auth addon...
	I0908 13:47:47.412572  499972 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 13:47:47.416199  499972 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 13:47:47.416221  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
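	
	The kapi.go:96 lines that dominate this log are a poll loop: list pods matching a label selector in the target namespace and report the phase until it leaves Pending. A rough equivalent using k8s.io/client-go — an illustration of the pattern, not minikube's kapi package; the namespace and selector are taken from the gcp-auth wait above, and the real loop also inspects readiness conditions, not just phase:
	
	```go
	// Rough equivalent of the kapi.go:96 wait loop: list pods by label
	// selector and poll until one reaches phase Running.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func waitForPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod with label %q not Running within %v", selector, timeout)
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPod(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 3*time.Minute); err != nil {
			panic(err)
		}
	}
	```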
	I0908 13:47:47.518120  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:47.518186  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:47.711713  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:47.773882  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:47.916011  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:48.018955  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:48.019097  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:48.103190  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:48.211998  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:47:48.316004  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:48.316046  499972 retry.go:31] will retry after 595.475609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:48.415972  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:48.518867  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:48.519104  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:48.712312  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:48.912576  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:48.915964  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:49.019416  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:49.019418  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:49.212198  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:49.414980  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 13:47:49.458213  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:49.458245  499972 retry.go:31] will retry after 1.216781159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:49.518135  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:49.518398  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:49.711848  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:49.915316  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:50.018412  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:50.018423  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:50.211758  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:50.416017  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:50.518757  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:50.518936  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:50.602765  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:50.675935  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:50.712612  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:50.916566  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:51.017816  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:51.018001  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:51.212128  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:47:51.223824  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:51.223858  499972 retry.go:31] will retry after 1.374108887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:51.415938  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:51.518918  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:51.519161  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:51.711703  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:51.915475  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:52.018567  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:52.018738  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:52.212264  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:52.416703  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:52.518625  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:52.518794  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:52.598781  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:52.712510  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:52.916357  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:53.017897  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:53.017951  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:53.102073  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	W0908 13:47:53.140219  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:53.140253  499972 retry.go:31] will retry after 1.767758207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:53.212017  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:53.416198  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:53.518088  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:53.518343  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:53.711752  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:53.915746  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:54.018946  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:54.019184  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:54.212153  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:54.416042  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:54.517937  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:54.518134  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:54.711963  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:54.908247  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:54.916213  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:55.018639  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:55.018717  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:55.102656  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:55.211379  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:55.415663  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 13:47:55.447553  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:55.447593  499972 retry.go:31] will retry after 3.090664167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
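	
	The retry.go:31 delays above (738ms, 595ms, 1.2s, 1.37s, 1.77s, 3.09s, and growing) follow a jittered, roughly doubling backoff, so a persistent failure consumes progressively longer intervals rather than hammering the apiserver. A self-contained sketch of that cadence — illustrative only, with hypothetical names, not minikube's retry package:
	
	```go
	// Illustrative jittered-backoff retry in the spirit of the retry.go
	// lines above; function and variable names are hypothetical.
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Jitter the current delay between 0.5x and 1.5x, then double it,
			// which reproduces the irregular but growing intervals in the log.
			jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
	}
	
	func main() {
		calls := 0
		_ = retryWithBackoff(5, 700*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return fmt.Errorf("process exited with status 1")
			}
			return nil
		})
	}
	```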
	I0908 13:47:55.518630  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:55.518823  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:55.714522  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:55.916542  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:56.018494  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:56.018728  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:56.212171  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:56.416172  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:56.517769  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:56.517938  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:56.711994  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:56.916511  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:57.018212  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:57.018390  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:57.103171  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:57.212373  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:57.416378  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:57.518099  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:57.518261  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:57.711813  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:57.915812  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:58.018849  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:58.018893  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:58.211260  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:58.416128  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:58.517733  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:58.517931  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:47:58.538950  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:47:58.712170  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:58.916547  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:59.018298  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:59.018464  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:59.090166  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:59.090216  499972 retry.go:31] will retry after 5.619473005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:47:59.211578  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:59.416285  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:47:59.518040  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:47:59.518130  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:47:59.602591  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:47:59.711499  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:47:59.916398  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:00.018148  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:00.018371  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:00.211587  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:00.416611  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:00.518371  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:00.518494  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:00.711863  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:00.916249  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:01.018098  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:01.018198  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:01.211676  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:01.415275  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:01.517997  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:01.518125  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:01.602688  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:01.711481  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:01.916294  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:02.018100  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:02.018268  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:02.211972  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:02.415647  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:02.518733  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:02.518805  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:02.711555  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:02.916707  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:03.018364  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:03.018564  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:03.212062  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:03.415821  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:03.518914  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:03.518924  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:03.711303  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:03.916251  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:04.018011  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:04.018082  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:04.103012  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:04.211648  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:04.416302  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:04.518317  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:04.518572  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:04.710437  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:48:04.712322  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:04.916388  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:05.017931  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:05.017988  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:05.212173  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:48:05.254314  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:48:05.254346  499972 retry.go:31] will retry after 13.909331469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:48:05.416071  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:05.517756  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:05.517946  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:05.711522  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:05.916244  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:06.018111  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:06.018137  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:06.211326  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:06.416366  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:06.518100  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:06.518167  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:06.602861  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:06.711554  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:06.915849  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:07.018560  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:07.018717  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:07.211729  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:07.415627  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:07.518370  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:07.518537  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:07.712125  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:07.915980  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:08.018799  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:08.018871  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:08.211468  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:08.416194  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:08.518113  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:08.518144  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:08.711424  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:08.916275  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:09.018059  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:09.018094  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:09.102772  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:09.211527  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:09.416356  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:09.518097  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:09.518145  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:09.711525  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:09.916192  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:10.018182  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:10.018233  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:10.211968  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:10.415912  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:10.519255  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:10.519274  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:10.711837  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:10.915958  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:11.018784  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:11.018861  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:11.212277  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:11.416246  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:11.517933  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:11.518164  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:11.602811  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:11.711609  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:11.916263  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:12.018370  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:12.018557  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:12.211991  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:12.416095  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:12.518176  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:12.518239  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:12.711565  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:12.916310  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:13.018188  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:13.018361  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:13.211672  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:13.415563  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:13.518494  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:13.518644  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:13.712264  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:13.916222  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:14.017901  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:14.018048  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:14.102766  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:14.211726  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:14.415580  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:14.518437  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:14.518491  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:14.711709  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:14.915522  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:15.018237  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:15.018396  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:15.211891  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:15.415770  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:15.518782  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:15.518983  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:15.712435  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:15.916385  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:16.018224  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:16.018370  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:16.103396  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:16.212328  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:16.416635  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:16.518565  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:16.518710  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:16.712536  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:16.916239  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:17.018096  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:17.018239  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:17.212416  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:17.416410  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:17.518568  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:17.518607  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:17.712220  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:17.916257  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:18.018232  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:18.018401  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:18.212130  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:18.416341  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:18.518134  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:18.518359  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:18.603023  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:18.712061  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:18.915948  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:19.018915  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:19.019062  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:19.164097  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:48:19.212450  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:19.416719  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:19.518684  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:19.518978  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:19.706062  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:48:19.706109  499972 retry.go:31] will retry after 17.638708214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:48:19.712164  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:19.915574  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:20.018474  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:20.018695  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:20.212066  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:20.416003  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:20.518794  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:20.518882  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:20.712212  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:20.916433  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:21.018460  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:21.018554  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:48:21.102322  499972 node_ready.go:57] node "addons-329194" has "Ready":"False" status (will retry)
	I0908 13:48:21.212248  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:21.416329  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:21.518199  499972 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:48:21.518227  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:21.520209  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:21.602692  499972 node_ready.go:49] node "addons-329194" is "Ready"
	I0908 13:48:21.602797  499972 node_ready.go:38] duration metric: took 42.00340648s for node "addons-329194" to be "Ready" ...
	I0908 13:48:21.602839  499972 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:48:21.602925  499972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:48:21.618417  499972 api_server.go:72] duration metric: took 44.516388417s to wait for apiserver process to appear ...
	I0908 13:48:21.618548  499972 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:48:21.618581  499972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 13:48:21.623060  499972 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0908 13:48:21.623982  499972 api_server.go:141] control plane version: v1.34.0
	I0908 13:48:21.624016  499972 api_server.go:131] duration metric: took 5.454822ms to wait for apiserver health ...
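The healthz probe in the lines above can be issued by hand; a hedged equivalent of what the log does (-k skips TLS verification, since the apiserver presents minikube's self-signed certificate; a healthy apiserver prints "ok", as the log shows):

	curl -k https://192.168.49.2:8443/healthz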
	I0908 13:48:21.624031  499972 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:48:21.627843  499972 system_pods.go:59] 20 kube-system pods found
	I0908 13:48:21.627874  499972 system_pods.go:61] "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Pending
	I0908 13:48:21.627884  499972 system_pods.go:61] "coredns-66bc5c9577-rsqn5" [b88b7cd5-a636-4218-99d3-86a7b035e7f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:48:21.627895  499972 system_pods.go:61] "csi-hostpath-attacher-0" [525871fc-12b0-4524-a1ce-d90ba16e7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:48:21.627901  499972 system_pods.go:61] "csi-hostpath-resizer-0" [54a2457c-958d-493c-abec-e036ddab2e51] Pending
	I0908 13:48:21.627905  499972 system_pods.go:61] "csi-hostpathplugin-rxnrg" [a6e476fc-d461-4806-ba6f-ac707013a9e3] Pending
	I0908 13:48:21.627909  499972 system_pods.go:61] "etcd-addons-329194" [826b8b0b-e939-44ca-9c0a-7d8562b78cc4] Running
	I0908 13:48:21.627912  499972 system_pods.go:61] "kindnet-vmdkv" [0318c320-d7d1-4987-8ba8-ab921e5de9a2] Running
	I0908 13:48:21.627916  499972 system_pods.go:61] "kube-apiserver-addons-329194" [5c061cf6-305a-40b3-9c68-bf572cfed482] Running
	I0908 13:48:21.627919  499972 system_pods.go:61] "kube-controller-manager-addons-329194" [d8f41399-2c57-4682-96c8-74f8aee09fe3] Running
	I0908 13:48:21.627924  499972 system_pods.go:61] "kube-ingress-dns-minikube" [e22e1598-5ba4-44ee-a0f6-2f00b6101357] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:48:21.627928  499972 system_pods.go:61] "kube-proxy-bnskb" [bbefa216-6b78-4066-857f-873b6c0b9583] Running
	I0908 13:48:21.627932  499972 system_pods.go:61] "kube-scheduler-addons-329194" [3eb22fd7-189b-4223-b6fe-4cb741de8997] Running
	I0908 13:48:21.627936  499972 system_pods.go:61] "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Pending
	I0908 13:48:21.627941  499972 system_pods.go:61] "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Pending
	I0908 13:48:21.627949  499972 system_pods.go:61] "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Pending
	I0908 13:48:21.627954  499972 system_pods.go:61] "registry-creds-764b6fb674-hv568" [b28b443b-33a8-4fec-a226-4b41d6c82b2b] Pending
	I0908 13:48:21.627965  499972 system_pods.go:61] "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Pending
	I0908 13:48:21.627970  499972 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jk868" [b1ceb4f9-1f17-46d6-962f-d05ab8576171] Pending
	I0908 13:48:21.627978  499972 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qq84z" [2f1b4252-fe05-4a62-b0ad-8f0d1b8f6bd4] Pending
	I0908 13:48:21.627984  499972 system_pods.go:61] "storage-provisioner" [12c5bc4c-68da-497b-b026-b315aedd1b9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:48:21.627993  499972 system_pods.go:74] duration metric: took 3.954521ms to wait for pod list to return data ...
	I0908 13:48:21.628007  499972 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:48:21.693155  499972 default_sa.go:45] found service account: "default"
	I0908 13:48:21.693199  499972 default_sa.go:55] duration metric: took 65.182481ms for default service account to be created ...
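A hedged equivalent of the default-service-account check, using the same context name that appears throughout this log:

	kubectl --context addons-329194 -n default get serviceaccount default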
	I0908 13:48:21.693221  499972 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:48:21.711793  499972 system_pods.go:86] 20 kube-system pods found
	I0908 13:48:21.711845  499972 system_pods.go:89] "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Pending
	I0908 13:48:21.711859  499972 system_pods.go:89] "coredns-66bc5c9577-rsqn5" [b88b7cd5-a636-4218-99d3-86a7b035e7f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:48:21.711872  499972 system_pods.go:89] "csi-hostpath-attacher-0" [525871fc-12b0-4524-a1ce-d90ba16e7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:48:21.711882  499972 system_pods.go:89] "csi-hostpath-resizer-0" [54a2457c-958d-493c-abec-e036ddab2e51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:48:21.711891  499972 system_pods.go:89] "csi-hostpathplugin-rxnrg" [a6e476fc-d461-4806-ba6f-ac707013a9e3] Pending
	I0908 13:48:21.711900  499972 system_pods.go:89] "etcd-addons-329194" [826b8b0b-e939-44ca-9c0a-7d8562b78cc4] Running
	I0908 13:48:21.711910  499972 system_pods.go:89] "kindnet-vmdkv" [0318c320-d7d1-4987-8ba8-ab921e5de9a2] Running
	I0908 13:48:21.711915  499972 system_pods.go:89] "kube-apiserver-addons-329194" [5c061cf6-305a-40b3-9c68-bf572cfed482] Running
	I0908 13:48:21.711924  499972 system_pods.go:89] "kube-controller-manager-addons-329194" [d8f41399-2c57-4682-96c8-74f8aee09fe3] Running
	I0908 13:48:21.711934  499972 system_pods.go:89] "kube-ingress-dns-minikube" [e22e1598-5ba4-44ee-a0f6-2f00b6101357] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:48:21.711942  499972 system_pods.go:89] "kube-proxy-bnskb" [bbefa216-6b78-4066-857f-873b6c0b9583] Running
	I0908 13:48:21.711948  499972 system_pods.go:89] "kube-scheduler-addons-329194" [3eb22fd7-189b-4223-b6fe-4cb741de8997] Running
	I0908 13:48:21.711955  499972 system_pods.go:89] "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Pending
	I0908 13:48:21.711961  499972 system_pods.go:89] "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Pending
	I0908 13:48:21.711966  499972 system_pods.go:89] "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Pending
	I0908 13:48:21.711971  499972 system_pods.go:89] "registry-creds-764b6fb674-hv568" [b28b443b-33a8-4fec-a226-4b41d6c82b2b] Pending
	I0908 13:48:21.711978  499972 system_pods.go:89] "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Pending
	I0908 13:48:21.711984  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jk868" [b1ceb4f9-1f17-46d6-962f-d05ab8576171] Pending
	I0908 13:48:21.711993  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qq84z" [2f1b4252-fe05-4a62-b0ad-8f0d1b8f6bd4] Pending
	I0908 13:48:21.712000  499972 system_pods.go:89] "storage-provisioner" [12c5bc4c-68da-497b-b026-b315aedd1b9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:48:21.712023  499972 retry.go:31] will retry after 276.316886ms: missing components: kube-dns
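The retry loop above is blocked on CoreDNS alone: kube-dns is the component name minikube reports while coredns-66bc5c9577-rsqn5 is still Pending. A hedged one-liner to watch the same pod converge (k8s-app=kube-dns is the standard label CoreDNS ships with):

	kubectl --context addons-329194 -n kube-system get pods -l k8s-app=kube-dns -w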
	I0908 13:48:21.792065  499972 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:48:21.792151  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:21.916826  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:21.994295  499972 system_pods.go:86] 20 kube-system pods found
	I0908 13:48:21.994330  499972 system_pods.go:89] "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 13:48:21.994337  499972 system_pods.go:89] "coredns-66bc5c9577-rsqn5" [b88b7cd5-a636-4218-99d3-86a7b035e7f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:48:21.994345  499972 system_pods.go:89] "csi-hostpath-attacher-0" [525871fc-12b0-4524-a1ce-d90ba16e7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:48:21.994350  499972 system_pods.go:89] "csi-hostpath-resizer-0" [54a2457c-958d-493c-abec-e036ddab2e51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:48:21.994356  499972 system_pods.go:89] "csi-hostpathplugin-rxnrg" [a6e476fc-d461-4806-ba6f-ac707013a9e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 13:48:21.994359  499972 system_pods.go:89] "etcd-addons-329194" [826b8b0b-e939-44ca-9c0a-7d8562b78cc4] Running
	I0908 13:48:21.994363  499972 system_pods.go:89] "kindnet-vmdkv" [0318c320-d7d1-4987-8ba8-ab921e5de9a2] Running
	I0908 13:48:21.994367  499972 system_pods.go:89] "kube-apiserver-addons-329194" [5c061cf6-305a-40b3-9c68-bf572cfed482] Running
	I0908 13:48:21.994372  499972 system_pods.go:89] "kube-controller-manager-addons-329194" [d8f41399-2c57-4682-96c8-74f8aee09fe3] Running
	I0908 13:48:21.994380  499972 system_pods.go:89] "kube-ingress-dns-minikube" [e22e1598-5ba4-44ee-a0f6-2f00b6101357] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:48:21.994385  499972 system_pods.go:89] "kube-proxy-bnskb" [bbefa216-6b78-4066-857f-873b6c0b9583] Running
	I0908 13:48:21.994391  499972 system_pods.go:89] "kube-scheduler-addons-329194" [3eb22fd7-189b-4223-b6fe-4cb741de8997] Running
	I0908 13:48:21.994395  499972 system_pods.go:89] "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Pending
	I0908 13:48:21.994405  499972 system_pods.go:89] "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:48:21.994417  499972 system_pods.go:89] "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:48:21.994425  499972 system_pods.go:89] "registry-creds-764b6fb674-hv568" [b28b443b-33a8-4fec-a226-4b41d6c82b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:48:21.994433  499972 system_pods.go:89] "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:48:21.994439  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jk868" [b1ceb4f9-1f17-46d6-962f-d05ab8576171] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:21.994448  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qq84z" [2f1b4252-fe05-4a62-b0ad-8f0d1b8f6bd4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:21.994456  499972 system_pods.go:89] "storage-provisioner" [12c5bc4c-68da-497b-b026-b315aedd1b9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:48:21.994474  499972 retry.go:31] will retry after 363.494982ms: missing components: kube-dns
	I0908 13:48:22.018305  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:22.018422  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:22.212268  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:22.396014  499972 system_pods.go:86] 20 kube-system pods found
	I0908 13:48:22.396062  499972 system_pods.go:89] "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 13:48:22.396075  499972 system_pods.go:89] "coredns-66bc5c9577-rsqn5" [b88b7cd5-a636-4218-99d3-86a7b035e7f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:48:22.396085  499972 system_pods.go:89] "csi-hostpath-attacher-0" [525871fc-12b0-4524-a1ce-d90ba16e7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:48:22.396094  499972 system_pods.go:89] "csi-hostpath-resizer-0" [54a2457c-958d-493c-abec-e036ddab2e51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:48:22.396101  499972 system_pods.go:89] "csi-hostpathplugin-rxnrg" [a6e476fc-d461-4806-ba6f-ac707013a9e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 13:48:22.396109  499972 system_pods.go:89] "etcd-addons-329194" [826b8b0b-e939-44ca-9c0a-7d8562b78cc4] Running
	I0908 13:48:22.396116  499972 system_pods.go:89] "kindnet-vmdkv" [0318c320-d7d1-4987-8ba8-ab921e5de9a2] Running
	I0908 13:48:22.396133  499972 system_pods.go:89] "kube-apiserver-addons-329194" [5c061cf6-305a-40b3-9c68-bf572cfed482] Running
	I0908 13:48:22.396139  499972 system_pods.go:89] "kube-controller-manager-addons-329194" [d8f41399-2c57-4682-96c8-74f8aee09fe3] Running
	I0908 13:48:22.396147  499972 system_pods.go:89] "kube-ingress-dns-minikube" [e22e1598-5ba4-44ee-a0f6-2f00b6101357] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:48:22.396156  499972 system_pods.go:89] "kube-proxy-bnskb" [bbefa216-6b78-4066-857f-873b6c0b9583] Running
	I0908 13:48:22.396162  499972 system_pods.go:89] "kube-scheduler-addons-329194" [3eb22fd7-189b-4223-b6fe-4cb741de8997] Running
	I0908 13:48:22.396173  499972 system_pods.go:89] "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:48:22.396185  499972 system_pods.go:89] "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:48:22.396196  499972 system_pods.go:89] "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:48:22.396204  499972 system_pods.go:89] "registry-creds-764b6fb674-hv568" [b28b443b-33a8-4fec-a226-4b41d6c82b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:48:22.396211  499972 system_pods.go:89] "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:48:22.396253  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jk868" [b1ceb4f9-1f17-46d6-962f-d05ab8576171] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:22.396277  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qq84z" [2f1b4252-fe05-4a62-b0ad-8f0d1b8f6bd4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:22.396286  499972 system_pods.go:89] "storage-provisioner" [12c5bc4c-68da-497b-b026-b315aedd1b9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:48:22.396307  499972 retry.go:31] will retry after 478.118915ms: missing components: kube-dns
	I0908 13:48:22.493993  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:22.594985  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:22.595028  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:22.712039  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:22.879427  499972 system_pods.go:86] 20 kube-system pods found
	I0908 13:48:22.879463  499972 system_pods.go:89] "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 13:48:22.879471  499972 system_pods.go:89] "coredns-66bc5c9577-rsqn5" [b88b7cd5-a636-4218-99d3-86a7b035e7f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:48:22.879478  499972 system_pods.go:89] "csi-hostpath-attacher-0" [525871fc-12b0-4524-a1ce-d90ba16e7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:48:22.879483  499972 system_pods.go:89] "csi-hostpath-resizer-0" [54a2457c-958d-493c-abec-e036ddab2e51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:48:22.879488  499972 system_pods.go:89] "csi-hostpathplugin-rxnrg" [a6e476fc-d461-4806-ba6f-ac707013a9e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 13:48:22.879492  499972 system_pods.go:89] "etcd-addons-329194" [826b8b0b-e939-44ca-9c0a-7d8562b78cc4] Running
	I0908 13:48:22.879496  499972 system_pods.go:89] "kindnet-vmdkv" [0318c320-d7d1-4987-8ba8-ab921e5de9a2] Running
	I0908 13:48:22.879499  499972 system_pods.go:89] "kube-apiserver-addons-329194" [5c061cf6-305a-40b3-9c68-bf572cfed482] Running
	I0908 13:48:22.879502  499972 system_pods.go:89] "kube-controller-manager-addons-329194" [d8f41399-2c57-4682-96c8-74f8aee09fe3] Running
	I0908 13:48:22.879507  499972 system_pods.go:89] "kube-ingress-dns-minikube" [e22e1598-5ba4-44ee-a0f6-2f00b6101357] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:48:22.879510  499972 system_pods.go:89] "kube-proxy-bnskb" [bbefa216-6b78-4066-857f-873b6c0b9583] Running
	I0908 13:48:22.879514  499972 system_pods.go:89] "kube-scheduler-addons-329194" [3eb22fd7-189b-4223-b6fe-4cb741de8997] Running
	I0908 13:48:22.879519  499972 system_pods.go:89] "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:48:22.879528  499972 system_pods.go:89] "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:48:22.879533  499972 system_pods.go:89] "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:48:22.879539  499972 system_pods.go:89] "registry-creds-764b6fb674-hv568" [b28b443b-33a8-4fec-a226-4b41d6c82b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:48:22.879553  499972 system_pods.go:89] "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:48:22.879561  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jk868" [b1ceb4f9-1f17-46d6-962f-d05ab8576171] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:22.879569  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qq84z" [2f1b4252-fe05-4a62-b0ad-8f0d1b8f6bd4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:22.879574  499972 system_pods.go:89] "storage-provisioner" [12c5bc4c-68da-497b-b026-b315aedd1b9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:48:22.879593  499972 retry.go:31] will retry after 526.45449ms: missing components: kube-dns
	I0908 13:48:22.915212  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:23.018132  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:23.018669  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:23.212550  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:23.412179  499972 system_pods.go:86] 20 kube-system pods found
	I0908 13:48:23.412219  499972 system_pods.go:89] "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 13:48:23.412228  499972 system_pods.go:89] "coredns-66bc5c9577-rsqn5" [b88b7cd5-a636-4218-99d3-86a7b035e7f8] Running
	I0908 13:48:23.412238  499972 system_pods.go:89] "csi-hostpath-attacher-0" [525871fc-12b0-4524-a1ce-d90ba16e7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:48:23.412247  499972 system_pods.go:89] "csi-hostpath-resizer-0" [54a2457c-958d-493c-abec-e036ddab2e51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:48:23.412255  499972 system_pods.go:89] "csi-hostpathplugin-rxnrg" [a6e476fc-d461-4806-ba6f-ac707013a9e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 13:48:23.412266  499972 system_pods.go:89] "etcd-addons-329194" [826b8b0b-e939-44ca-9c0a-7d8562b78cc4] Running
	I0908 13:48:23.412272  499972 system_pods.go:89] "kindnet-vmdkv" [0318c320-d7d1-4987-8ba8-ab921e5de9a2] Running
	I0908 13:48:23.412282  499972 system_pods.go:89] "kube-apiserver-addons-329194" [5c061cf6-305a-40b3-9c68-bf572cfed482] Running
	I0908 13:48:23.412289  499972 system_pods.go:89] "kube-controller-manager-addons-329194" [d8f41399-2c57-4682-96c8-74f8aee09fe3] Running
	I0908 13:48:23.412300  499972 system_pods.go:89] "kube-ingress-dns-minikube" [e22e1598-5ba4-44ee-a0f6-2f00b6101357] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:48:23.412305  499972 system_pods.go:89] "kube-proxy-bnskb" [bbefa216-6b78-4066-857f-873b6c0b9583] Running
	I0908 13:48:23.412311  499972 system_pods.go:89] "kube-scheduler-addons-329194" [3eb22fd7-189b-4223-b6fe-4cb741de8997] Running
	I0908 13:48:23.412318  499972 system_pods.go:89] "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:48:23.412327  499972 system_pods.go:89] "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:48:23.412338  499972 system_pods.go:89] "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:48:23.412346  499972 system_pods.go:89] "registry-creds-764b6fb674-hv568" [b28b443b-33a8-4fec-a226-4b41d6c82b2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:48:23.412356  499972 system_pods.go:89] "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:48:23.412366  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jk868" [b1ceb4f9-1f17-46d6-962f-d05ab8576171] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:23.412374  499972 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qq84z" [2f1b4252-fe05-4a62-b0ad-8f0d1b8f6bd4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:48:23.412382  499972 system_pods.go:89] "storage-provisioner" [12c5bc4c-68da-497b-b026-b315aedd1b9b] Running
	I0908 13:48:23.412394  499972 system_pods.go:126] duration metric: took 1.719164398s to wait for k8s-apps to be running ...
	I0908 13:48:23.412408  499972 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:48:23.412491  499972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:48:23.415433  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:23.426623  499972 system_svc.go:56] duration metric: took 14.203328ms WaitForService to wait for kubelet
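The kubelet probe is the systemctl call shown two lines above; run interactively without --quiet it prints the unit state instead of only setting the exit code, e.g.:

	sudo systemctl is-active kubelet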
	I0908 13:48:23.426665  499972 kubeadm.go:578] duration metric: took 46.32464501s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:48:23.426692  499972 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:48:23.429956  499972 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 13:48:23.429990  499972 node_conditions.go:123] node cpu capacity is 8
	I0908 13:48:23.430005  499972 node_conditions.go:105] duration metric: took 3.308073ms to run NodePressure ...
	I0908 13:48:23.430019  499972 start.go:241] waiting for startup goroutines ...
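Both capacity figures in the NodePressure lines are read straight off the node object; a hedged equivalent query (standard jsonpath fields, expected to print the same 8 CPUs and 304681132Ki seen above):

	kubectl --context addons-329194 get node addons-329194 -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}'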
	I0908 13:48:23.519328  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:23.520138  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:23.711987  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:23.916384  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:24.018498  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:24.018536  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:24.212995  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:24.415823  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:24.519394  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:24.519604  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:24.712929  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:24.915963  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:25.019484  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:25.019541  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:25.213180  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:25.416583  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:25.518473  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:25.518516  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:25.713039  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:25.916354  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:26.018393  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:26.018634  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:26.212808  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:26.415857  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:26.518596  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:26.518866  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:26.713492  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:26.916356  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:27.018351  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:27.018481  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:27.212846  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:27.415846  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:27.519465  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:27.520183  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:27.711768  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:27.915558  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:28.018693  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:28.018743  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:28.212692  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:28.416951  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:28.519114  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:28.519327  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:28.712958  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:28.916067  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:29.021005  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:29.021128  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:29.212095  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:29.416321  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:29.518088  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:29.518283  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:29.712110  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:29.916043  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:30.017981  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:30.018147  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:30.212390  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:30.416587  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:30.518890  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:30.519119  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:30.712326  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:30.916033  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:31.017929  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:31.018055  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:31.212188  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:31.416038  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:31.518542  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:31.519075  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:31.711688  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:31.916610  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:32.018624  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:32.018656  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:32.211716  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:32.415678  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:32.519315  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:32.519518  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:32.713673  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:32.916480  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:33.019101  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:33.019148  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:33.212663  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:33.416693  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:33.518609  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:33.518871  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:33.711850  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:33.915556  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:34.018703  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:34.018765  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:34.212564  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:34.416604  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:34.518921  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:34.518968  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:34.713079  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:34.915875  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:35.018748  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:35.018831  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:35.212148  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:35.416074  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:35.517956  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:35.518148  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:35.712198  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:35.916130  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:36.018166  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:36.018330  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:36.213413  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:36.416924  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:36.591778  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:36.592307  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:36.713352  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:36.916559  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:37.018697  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:37.019375  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:37.212128  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:37.346025  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:48:37.416877  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:37.519527  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:37.519695  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:37.713734  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:37.916800  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:38.090791  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:38.091340  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:38.212713  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:38.415431  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:38.519302  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:38.519704  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:38.712803  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:38.916374  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:39.018395  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:39.018505  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:39.212772  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:39.228570  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.882493974s)
	W0908 13:48:39.228626  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:48:39.228651  499972 retry.go:31] will retry after 30.759551659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
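The stderr itself names the escape hatch; applied to the same invocation it would look like the line below (hedged: --validate=false merely suppresses the client-side check, and a document that truly lacks apiVersion and kind still cannot be created server-side, so fixing the manifest is the real remedy):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml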
	I0908 13:48:39.415915  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:39.518979  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:39.519193  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:39.712783  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:39.916321  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:40.018438  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:40.018449  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:40.212866  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:40.415721  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:40.518864  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:40.518927  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:40.712075  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:40.916376  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:41.018618  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:41.018788  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:41.212773  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:41.416060  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:41.518649  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:41.518660  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:41.711982  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:41.916051  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:42.018550  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:42.018636  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:42.212660  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:42.416883  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:42.518928  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:42.518994  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:42.711706  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:42.915913  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:43.018741  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:43.018796  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:43.212144  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:43.416515  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:43.518579  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:43.518614  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:43.711629  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:43.916867  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:44.019427  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:44.020017  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:44.212409  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:44.416407  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:44.518796  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:44.518931  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:44.711588  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:44.919295  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:45.018268  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:45.018308  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:45.212802  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:45.415651  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:45.518834  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:45.518846  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:45.711958  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:45.915926  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:46.018866  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:46.018958  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:46.212041  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:46.416561  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:46.518922  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:46.518954  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:46.712494  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:46.992450  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:47.091266  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:47.092008  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:47.213063  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:47.496169  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:47.594704  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:47.595108  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:47.712561  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:47.991864  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:48.092854  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:48.092883  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:48.212690  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:48.493124  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:48.590949  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:48.591043  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:48.713032  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:48.916696  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:49.019078  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:49.019126  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:49.212976  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:49.415719  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:49.519579  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:49.519741  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:49.712890  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:49.915917  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:50.019150  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:50.019254  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:50.212752  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:50.416525  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:50.518939  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:50.519041  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:50.712353  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:50.916827  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:51.019050  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:51.019229  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:51.212763  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:51.416583  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:51.518812  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:51.519107  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:51.712264  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:51.916270  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:52.018389  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:52.018545  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:52.212212  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:52.416599  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:52.519248  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:52.519326  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:52.712432  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:52.916856  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:53.018855  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:53.019039  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:53.212238  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:53.416426  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:53.519144  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:53.519202  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:53.712555  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:53.916308  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:54.018438  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:54.018517  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:54.211687  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:54.415664  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:54.518922  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:54.519289  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:54.712631  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:54.917021  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:55.018302  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:55.018322  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:55.213085  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:55.416148  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:55.518418  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:55.518537  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:55.712837  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:55.915612  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:56.018751  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:56.018751  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:56.212357  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:56.417314  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:56.519053  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:56.519232  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:56.712325  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:56.945898  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:57.019572  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:57.019763  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:57.212369  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:57.416610  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:57.518658  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:57.518659  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:57.711663  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:57.916290  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:58.018673  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:58.018684  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:58.211800  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:58.416228  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:58.518223  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:58.518644  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:58.711665  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:58.916564  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:59.019484  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:59.019617  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:59.213187  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:59.416218  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:48:59.518504  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:48:59.518559  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:48:59.712537  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:48:59.915986  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:00.019159  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:00.019364  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:00.212714  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:00.417197  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:00.518405  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:00.518470  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:00.713273  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:00.916470  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:01.018359  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:01.018485  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:01.212752  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:01.415772  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:01.518746  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:01.518839  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:01.711967  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:01.915908  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:02.019014  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:02.019057  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:02.212227  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:02.416040  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:02.517969  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:02.518060  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:02.712445  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:02.916700  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:03.018967  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:03.019035  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:03.212072  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:03.493169  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:03.519058  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:03.519183  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:03.713895  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:03.916193  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:04.018561  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:04.018628  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:04.213097  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:04.415872  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:04.519615  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:04.519976  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:04.712842  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:04.916705  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:05.020088  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:05.020285  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:05.211820  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:05.415738  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:05.518955  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:05.519013  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:05.712542  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:05.916347  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:06.018243  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:49:06.018311  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:06.212575  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:06.416904  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:06.518835  499972 kapi.go:107] duration metric: took 1m23.503833683s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 13:49:06.518954  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:06.712150  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:06.916867  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:07.019314  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:07.211510  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:07.416707  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:07.519058  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:07.712251  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:07.916161  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:08.018379  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:08.212563  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:08.416365  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:08.518542  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:08.711391  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:08.916354  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:09.018412  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:09.212199  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:09.416058  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:09.518057  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:09.712697  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:09.915380  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:09.988411  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:49:10.018988  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:10.212188  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:10.416157  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:10.518230  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:49:10.534343  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:49:10.534381  499972 retry.go:31] will retry after 20.239753113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:49:10.712609  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:10.916835  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:11.018675  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:11.213313  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:11.416222  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:11.518348  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:11.712747  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:11.916661  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:12.019073  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:12.212346  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:12.416224  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:12.518535  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:12.712703  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:12.916440  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:13.018604  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:13.213153  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:13.474422  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:13.518587  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:13.712682  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:13.927844  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:14.018920  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:14.211666  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:14.416841  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:14.518904  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:14.712538  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:14.916344  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:15.018392  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:15.212935  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:15.416117  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:15.518394  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:15.712896  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:15.916284  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:16.018506  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:16.211268  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:16.416342  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:16.518506  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:16.713506  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:16.916706  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:17.019182  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:17.213821  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:17.417644  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:17.518880  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:17.712135  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:17.916718  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:18.019038  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:18.214866  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:18.415613  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:18.590821  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:18.712197  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:18.916728  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:19.019309  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:19.212781  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:19.415600  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:19.518492  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:19.712968  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:19.916061  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:20.021987  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:20.212759  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:20.416936  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:20.518997  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:20.712554  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:20.916777  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:21.019009  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:21.212743  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:21.417399  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:21.518869  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:21.712447  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:21.916615  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:22.018549  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:22.211590  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:22.416695  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:22.519377  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:22.713057  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:22.916264  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:23.018703  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:23.211920  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:49:23.415824  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:23.519292  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:23.712760  499972 kapi.go:107] duration metric: took 1m39.504271241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 13:49:23.916518  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:24.018866  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:24.415900  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:24.519921  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:24.916138  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:25.018202  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:25.416629  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:25.519423  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:25.916151  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:26.018009  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:26.491855  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:26.594681  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:26.916033  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:27.091552  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:27.490924  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:27.592549  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:27.990319  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:28.092349  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:28.417205  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:28.590478  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:28.916248  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:29.018410  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:29.416745  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:29.518980  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:29.916704  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:30.018783  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:30.416190  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:30.518314  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:30.774351  499972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:49:31.012757  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:31.018687  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:31.417058  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:31.518593  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:31.895001  499972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.120599323s)
	W0908 13:49:31.895145  499972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 13:49:31.895393  499972 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
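Editor's note: both apply attempts fail for the same reason: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest omits the two required type fields, exactly as the stderr reports ("apiVersion not set, kind not set"). The actual ig-crd.yaml contents are not captured in this log; purely as a hedged illustration, a minimal CustomResourceDefinition that would pass this validation step looks like the sketch below (group, kind, and names are placeholders, not the real inspektor-gadget CRD shipped by the addon):

	# Minimal sketch of a manifest that satisfies kubectl validation.
	# All names here are hypothetical; the point is only that apiVersion
	# and kind must be present at the top level of every document.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.demo.example.com
	spec:
	  group: demo.example.com
	  names:
	    kind: Example
	    plural: examples
	    singular: example
	  scope: Namespaced
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

Validation could also be bypassed with --validate=false, as the error text itself suggests, but that would mask rather than fix the malformed manifest.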
	I0908 13:49:31.916252  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:32.018778  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:32.416327  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:32.518381  499972 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:49:32.915902  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:33.019397  499972 kapi.go:107] duration metric: took 1m50.004445375s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 13:49:33.417162  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:33.997239  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:34.416734  499972 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:49:34.916274  499972 kapi.go:107] duration metric: took 1m47.503700187s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 13:49:34.918058  499972 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-329194 cluster.
	I0908 13:49:34.919404  499972 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 13:49:34.920676  499972 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 13:49:34.921974  499972 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, default-storageclass, ingress-dns, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0908 13:49:34.923163  499972 addons.go:514] duration metric: took 1m57.821096472s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin registry-creds default-storageclass ingress-dns storage-provisioner-rancher storage-provisioner metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0908 13:49:34.923228  499972 start.go:246] waiting for cluster config update ...
	I0908 13:49:34.923257  499972 start.go:255] writing updated cluster config ...
	I0908 13:49:34.923538  499972 ssh_runner.go:195] Run: rm -f paused
	I0908 13:49:34.927247  499972 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:49:34.930913  499972 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rsqn5" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:34.935161  499972 pod_ready.go:94] pod "coredns-66bc5c9577-rsqn5" is "Ready"
	I0908 13:49:34.935187  499972 pod_ready.go:86] duration metric: took 4.249104ms for pod "coredns-66bc5c9577-rsqn5" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:34.937107  499972 pod_ready.go:83] waiting for pod "etcd-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:34.941001  499972 pod_ready.go:94] pod "etcd-addons-329194" is "Ready"
	I0908 13:49:34.941026  499972 pod_ready.go:86] duration metric: took 3.897376ms for pod "etcd-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:34.942907  499972 pod_ready.go:83] waiting for pod "kube-apiserver-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:34.946326  499972 pod_ready.go:94] pod "kube-apiserver-addons-329194" is "Ready"
	I0908 13:49:34.946348  499972 pod_ready.go:86] duration metric: took 3.420869ms for pod "kube-apiserver-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:34.948164  499972 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:35.331391  499972 pod_ready.go:94] pod "kube-controller-manager-addons-329194" is "Ready"
	I0908 13:49:35.331426  499972 pod_ready.go:86] duration metric: took 383.243308ms for pod "kube-controller-manager-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:35.531777  499972 pod_ready.go:83] waiting for pod "kube-proxy-bnskb" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:35.932144  499972 pod_ready.go:94] pod "kube-proxy-bnskb" is "Ready"
	I0908 13:49:35.932172  499972 pod_ready.go:86] duration metric: took 400.364251ms for pod "kube-proxy-bnskb" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:36.131288  499972 pod_ready.go:83] waiting for pod "kube-scheduler-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:36.531977  499972 pod_ready.go:94] pod "kube-scheduler-addons-329194" is "Ready"
	I0908 13:49:36.532009  499972 pod_ready.go:86] duration metric: took 400.688923ms for pod "kube-scheduler-addons-329194" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:49:36.532021  499972 pod_ready.go:40] duration metric: took 1.604734952s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:49:36.575270  499972 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:49:36.579019  499972 out.go:179] * Done! kubectl is now configured to use "addons-329194" cluster and "default" namespace by default
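Editor's note: the gcp-auth hints logged at 13:49:34 describe the opt-out mechanism in prose only: a pod skips the credential mount by carrying a label whose key is gcp-auth-skip-secret. A hedged sketch of such a pod manifest follows; the label key comes from the log, while the value "true" and the pod/container names are assumptions for illustration (the image is the echo-server image pulled later in this log):

	# Hypothetical pod that opts out of the gcp-auth credential mount.
	# The label key is taken from the minikube output above; the value
	# "true" is assumed, since the log only specifies the key.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-creds-demo
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	    - name: echo
	      image: docker.io/kicbase/echo-server:1.0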
	
	
	==> CRI-O <==
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.189896465Z" level=info msg="Removing pod sandbox: a431fc2216181a527eeac87fe80eb9e42f7adfe91d91f34057e5aeeb342c7a5e" id=220f5c20-accc-4209-a1e2-5d409b080dc7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.196806107Z" level=info msg="Removed pod sandbox: a431fc2216181a527eeac87fe80eb9e42f7adfe91d91f34057e5aeeb342c7a5e" id=220f5c20-accc-4209-a1e2-5d409b080dc7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.197361244Z" level=info msg="Stopping pod sandbox: f3d754c04806aa25b0bf312a7fa60d61590d1e4745583eca080432b4fd204ebb" id=0349e8e9-2657-404a-8ec3-d49894040306 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.197405524Z" level=info msg="Stopped pod sandbox (already stopped): f3d754c04806aa25b0bf312a7fa60d61590d1e4745583eca080432b4fd204ebb" id=0349e8e9-2657-404a-8ec3-d49894040306 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.197788346Z" level=info msg="Removing pod sandbox: f3d754c04806aa25b0bf312a7fa60d61590d1e4745583eca080432b4fd204ebb" id=8c25fe42-cbd4-4afa-b8ca-6a69e54e5c2e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.205099473Z" level=info msg="Removed pod sandbox: f3d754c04806aa25b0bf312a7fa60d61590d1e4745583eca080432b4fd204ebb" id=8c25fe42-cbd4-4afa-b8ca-6a69e54e5c2e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.205586430Z" level=info msg="Stopping pod sandbox: a0ee32a7f312dcbb27e03fb59254fc82ec059704b535471162a9773adc431612" id=c8af2474-dd78-4b21-9bad-9d0e3a8b3e10 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.205627804Z" level=info msg="Stopped pod sandbox (already stopped): a0ee32a7f312dcbb27e03fb59254fc82ec059704b535471162a9773adc431612" id=c8af2474-dd78-4b21-9bad-9d0e3a8b3e10 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.205947690Z" level=info msg="Removing pod sandbox: a0ee32a7f312dcbb27e03fb59254fc82ec059704b535471162a9773adc431612" id=d2fb26f5-49a4-4d8b-8dc4-e87d1eceaa07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.212370992Z" level=info msg="Removed pod sandbox: a0ee32a7f312dcbb27e03fb59254fc82ec059704b535471162a9773adc431612" id=d2fb26f5-49a4-4d8b-8dc4-e87d1eceaa07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.212872712Z" level=info msg="Stopping pod sandbox: 7930c2b825bf4f2eb1b41f04129afa70f7e6325da9eb2bb6fa82200cda1ab910" id=31a5c95d-151d-4125-be3e-e358e8cf4a07 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.212901285Z" level=info msg="Stopped pod sandbox (already stopped): 7930c2b825bf4f2eb1b41f04129afa70f7e6325da9eb2bb6fa82200cda1ab910" id=31a5c95d-151d-4125-be3e-e358e8cf4a07 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.213217494Z" level=info msg="Removing pod sandbox: 7930c2b825bf4f2eb1b41f04129afa70f7e6325da9eb2bb6fa82200cda1ab910" id=67ef7872-1a56-4031-a9b9-47a43441c59d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:51:32 addons-329194 crio[1054]: time="2025-09-08 13:51:32.219889517Z" level=info msg="Removed pod sandbox: 7930c2b825bf4f2eb1b41f04129afa70f7e6325da9eb2bb6fa82200cda1ab910" id=67ef7872-1a56-4031-a9b9-47a43441c59d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.763833705Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-6mtgz/POD" id=0d64fa08-752b-474c-aa49-d78495f903c9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.763936744Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.787041141Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6mtgz Namespace:default ID:e8eb79402e56400e14fd588590171e437f579969f545dc1b4afd439dd3ec1123 UID:a22cdb6b-2cd1-4f0e-9bf4-989cfe1199c7 NetNS:/var/run/netns/637066e7-f573-4972-881a-269e2e4f778a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.787095636Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-6mtgz to CNI network \"kindnet\" (type=ptp)"
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.797619899Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6mtgz Namespace:default ID:e8eb79402e56400e14fd588590171e437f579969f545dc1b4afd439dd3ec1123 UID:a22cdb6b-2cd1-4f0e-9bf4-989cfe1199c7 NetNS:/var/run/netns/637066e7-f573-4972-881a-269e2e4f778a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.797815919Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-6mtgz for CNI network kindnet (type=ptp)"
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.800865223Z" level=info msg="Ran pod sandbox e8eb79402e56400e14fd588590171e437f579969f545dc1b4afd439dd3ec1123 with infra container: default/hello-world-app-5d498dc89-6mtgz/POD" id=0d64fa08-752b-474c-aa49-d78495f903c9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.801985970Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e462ec71-cf9f-4740-8c67-b746b2dc17c9 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.802193360Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e462ec71-cf9f-4740-8c67-b746b2dc17c9 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.802730895Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0d54ae3c-c8a8-4412-9d9e-015248fe427c name=/runtime.v1.ImageService/PullImage
	Sep 08 13:52:44 addons-329194 crio[1054]: time="2025-09-08 13:52:44.806984032Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58a4cb386ea60       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   4f8fdea175fe4       nginx
	f0edb9c808a6d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   3c6afd710f6b0       busybox
	a9526562eef17       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   174993b5e761b       ingress-nginx-controller-9cc49f96f-v74n6
	4523064ee4d75       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            3 minutes ago       Running             gadget                    0                   aad98fd6ecaa1       gadget-jjphw
	5dd33469d200d       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago       Exited              patch                     1                   a13338ced24d4       ingress-nginx-admission-patch-z4g4l
	bc1318962cfdb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   6dde3b0f32a6f       ingress-nginx-admission-create-ct9v4
	fb22e06d0909e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   614d7b409a257       kube-ingress-dns-minikube
	8d742fa0a4216       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   82798d03a74fb       coredns-66bc5c9577-rsqn5
	1b98d8dfa5653       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a7481ff56bfad       storage-provisioner
	153d2c4b1dc01       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   326cae2ede540       kube-proxy-bnskb
	6b4c3a802ad6e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             5 minutes ago       Running             kindnet-cni               0                   e33ed7d451bda       kindnet-vmdkv
	6bae7fe75868c       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   dd7184b475dba       kube-scheduler-addons-329194
	6ad93fc5b84d9       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   013de6978a9a6       kube-controller-manager-addons-329194
	569001a5202f9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   24337e3d7c5f9       kube-apiserver-addons-329194
	d4d2b0cfe0cac       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   dd59492abd8c0       etcd-addons-329194
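	
	(Note: this table is CRI-O's own view of the node. A hedged way to reproduce it directly, since crictl talks to the same CRI socket the kubelet uses:
	
	  out/minikube-linux-amd64 -p addons-329194 ssh "sudo crictl ps -a"
	
	The -a flag includes exited containers, which is why the one-shot admission create/patch jobs above still appear.)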
	
	
	==> coredns [8d742fa0a4216d3a7e48f298db7e8c58796c21f2313ba95ebfda06a1d8c08084] <==
	[INFO] 10.244.0.16:53563 - 7814 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004751891s
	[INFO] 10.244.0.16:37192 - 44604 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004612942s
	[INFO] 10.244.0.16:37192 - 45068 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.019505586s
	[INFO] 10.244.0.16:50993 - 8269 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004835737s
	[INFO] 10.244.0.16:50993 - 8589 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007578266s
	[INFO] 10.244.0.16:60284 - 8853 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196088s
	[INFO] 10.244.0.16:60284 - 8348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000234687s
	[INFO] 10.244.0.22:48241 - 19731 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000260922s
	[INFO] 10.244.0.22:46788 - 59080 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000378275s
	[INFO] 10.244.0.22:50889 - 47880 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135231s
	[INFO] 10.244.0.22:37447 - 43440 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159555s
	[INFO] 10.244.0.22:53223 - 42860 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111239s
	[INFO] 10.244.0.22:46144 - 20269 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000188026s
	[INFO] 10.244.0.22:60748 - 4703 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003450593s
	[INFO] 10.244.0.22:35097 - 56182 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004144914s
	[INFO] 10.244.0.22:47479 - 62784 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007131384s
	[INFO] 10.244.0.22:50464 - 50423 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007311256s
	[INFO] 10.244.0.22:40606 - 45363 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004832467s
	[INFO] 10.244.0.22:55062 - 42851 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00578042s
	[INFO] 10.244.0.22:50930 - 33424 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004946007s
	[INFO] 10.244.0.22:40373 - 33329 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006234595s
	[INFO] 10.244.0.22:48921 - 20840 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001022486s
	[INFO] 10.244.0.22:54844 - 29033 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002075703s
	[INFO] 10.244.0.25:35917 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000364886s
	[INFO] 10.244.0.25:49344 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188101s
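	
	(Note: the NXDOMAIN chains above (registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal, .google.internal, and so on) are the expected effect of the pod resolv.conf search list with the default ndots:5: each search suffix is tried before the bare name finally answers NOERROR. A quick way to see this from inside a pod, sketched here against the existing busybox pod:
	
	  kubectl --context addons-329194 exec busybox -- cat /etc/resolv.conf
	  kubectl --context addons-329194 exec busybox -- nslookup registry.kube-system.svc.cluster.local)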
	
	
	==> describe nodes <==
	Name:               addons-329194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-329194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=addons-329194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_47_32_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-329194
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:47:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-329194
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:52:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:50:35 +0000   Mon, 08 Sep 2025 13:47:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:50:35 +0000   Mon, 08 Sep 2025 13:47:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:50:35 +0000   Mon, 08 Sep 2025 13:47:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:50:35 +0000   Mon, 08 Sep 2025 13:48:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-329194
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c63cea54f22a477391b7977f563ca5b2
	  System UUID:                cbf4e48f-1a86-46d9-b793-1fec454faa02
	  Boot ID:                    d8938bde-5570-4c3e-82d1-cfb806dfa720
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-5d498dc89-6mtgz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-jjphw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-v74n6    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m4s
	  kube-system                 coredns-66bc5c9577-rsqn5                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m9s
	  kube-system                 etcd-addons-329194                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m15s
	  kube-system                 kindnet-vmdkv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m10s
	  kube-system                 kube-apiserver-addons-329194                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-329194       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-bnskb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-329194                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m3s   kube-proxy       
	  Normal   Starting                 5m15s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m15s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m15s  kubelet          Node addons-329194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m15s  kubelet          Node addons-329194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m15s  kubelet          Node addons-329194 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m10s  node-controller  Node addons-329194 event: Registered Node addons-329194 in Controller
	  Normal   NodeReady                4m25s  kubelet          Node addons-329194 status is now: NodeReady
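	
	(Note: this block can be regenerated at any time for comparison with:
	
	  kubectl --context addons-329194 describe node addons-329194
	
	The Allocated resources table is the quickest sanity check here: 950m CPU requested of 8 cores, so the node is nowhere near request pressure.)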
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 b3 0c 6b ed f7 08 06
	[  +0.000353] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 21 55 74 ed 2f 08 06
	[ +19.558646] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 9c ff fc 80 83 08 06
	[  +0.001186] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 55 17 2e eb 59 08 06
	[Sep 8 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 81 9e 63 0c 43 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9e 55 17 2e eb 59 08 06
	[Sep 8 13:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +1.030999] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +2.015806] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +4.159558] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +8.191102] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[Sep 8 13:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[ +32.764548] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
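	
	(Note: "martian source ... from 127.0.0.1" means packets with a loopback source address arrived on eth0; the kernel only logs these when martian logging is enabled. As a sketch, the relevant sysctls can be inspected with:
	
	  out/minikube-linux-amd64 -p addons-329194 ssh "sudo sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians")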
	
	
	==> etcd [d4d2b0cfe0cac21b052600454c401b723b041b4506fb930ae263615ac67de2d8] <==
	{"level":"info","ts":"2025-09-08T13:47:41.901506Z","caller":"traceutil/trace.go:172","msg":"trace[2060524719] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:480; }","duration":"101.188591ms","start":"2025-09-08T13:47:41.800301Z","end":"2025-09-08T13:47:41.901489Z","steps":["trace[2060524719] 'agreement among raft nodes before linearized reading'  (duration: 100.977793ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.901656Z","caller":"traceutil/trace.go:172","msg":"trace[71755595] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"106.42675ms","start":"2025-09-08T13:47:41.795210Z","end":"2025-09-08T13:47:41.901637Z","steps":["trace[71755595] 'process raft request'  (duration: 106.252189ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.901730Z","caller":"traceutil/trace.go:172","msg":"trace[364312198] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"100.832127ms","start":"2025-09-08T13:47:41.800889Z","end":"2025-09-08T13:47:41.901721Z","steps":["trace[364312198] 'process raft request'  (duration: 100.774082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:47:41.901812Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.108817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" limit:1 ","response":"range_response_count:1 size:849"}
	{"level":"info","ts":"2025-09-08T13:47:41.902660Z","caller":"traceutil/trace.go:172","msg":"trace[76682762] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:1; response_revision:487; }","duration":"102.95986ms","start":"2025-09-08T13:47:41.799686Z","end":"2025-09-08T13:47:41.902646Z","steps":["trace[76682762] 'agreement among raft nodes before linearized reading'  (duration: 102.018229ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.901868Z","caller":"traceutil/trace.go:172","msg":"trace[924984226] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"101.819766ms","start":"2025-09-08T13:47:41.800040Z","end":"2025-09-08T13:47:41.901860Z","steps":["trace[924984226] 'process raft request'  (duration: 101.564148ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.901869Z","caller":"traceutil/trace.go:172","msg":"trace[1224196268] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"106.328243ms","start":"2025-09-08T13:47:41.795531Z","end":"2025-09-08T13:47:41.901859Z","steps":["trace[1224196268] 'process raft request'  (duration: 105.9764ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.901892Z","caller":"traceutil/trace.go:172","msg":"trace[336941976] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"101.520176ms","start":"2025-09-08T13:47:41.800367Z","end":"2025-09-08T13:47:41.901887Z","steps":["trace[336941976] 'process raft request'  (duration: 101.269678ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.902026Z","caller":"traceutil/trace.go:172","msg":"trace[746830411] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"102.161072ms","start":"2025-09-08T13:47:41.799855Z","end":"2025-09-08T13:47:41.902016Z","steps":["trace[746830411] 'process raft request'  (duration: 101.710846ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:47:41.902036Z","caller":"traceutil/trace.go:172","msg":"trace[1172675427] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"107.173517ms","start":"2025-09-08T13:47:41.794856Z","end":"2025-09-08T13:47:41.902029Z","steps":["trace[1172675427] 'process raft request'  (duration: 106.503907ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:47:44.779688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:47:44.786604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:48:06.240816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:48:06.247327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:48:06.289889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:48:06.298054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37860","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:48:56.944608Z","caller":"traceutil/trace.go:172","msg":"trace[970482354] transaction","detail":"{read_only:false; response_revision:1058; number_of_response:1; }","duration":"149.778189ms","start":"2025-09-08T13:48:56.794814Z","end":"2025-09-08T13:48:56.944592Z","steps":["trace[970482354] 'process raft request'  (duration: 149.628578ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:49:02.386661Z","caller":"traceutil/trace.go:172","msg":"trace[1630247567] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"129.632609ms","start":"2025-09-08T13:49:02.257007Z","end":"2025-09-08T13:49:02.386640Z","steps":["trace[1630247567] 'process raft request'  (duration: 129.188349ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:49:31.010864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.623116ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128039831367424945 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:1203 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:65 lease:8128039831367424943 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T13:49:31.011200Z","caller":"traceutil/trace.go:172","msg":"trace[967524232] transaction","detail":"{read_only:false; response_revision:1232; number_of_response:1; }","duration":"186.993454ms","start":"2025-09-08T13:49:30.824164Z","end":"2025-09-08T13:49:31.011157Z","steps":["trace[967524232] 'process raft request'  (duration: 64.988076ms)","trace[967524232] 'compare'  (duration: 121.480507ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:49:47.042512Z","caller":"traceutil/trace.go:172","msg":"trace[1576574283] transaction","detail":"{read_only:false; response_revision:1313; number_of_response:1; }","duration":"125.903914ms","start":"2025-09-08T13:49:46.916585Z","end":"2025-09-08T13:49:47.042489Z","steps":["trace[1576574283] 'process raft request'  (duration: 62.150421ms)","trace[1576574283] 'compare'  (duration: 63.621148ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T13:50:21.286677Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.133114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2025-09-08T13:50:21.286865Z","caller":"traceutil/trace.go:172","msg":"trace[259246643] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1623; }","duration":"157.338158ms","start":"2025-09-08T13:50:21.129510Z","end":"2025-09-08T13:50:21.286848Z","steps":["trace[259246643] 'agreement among raft nodes before linearized reading'  (duration: 60.086686ms)","trace[259246643] 'range keys from in-memory index tree'  (duration: 96.849245ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:50:21.286704Z","caller":"traceutil/trace.go:172","msg":"trace[1236881016] transaction","detail":"{read_only:false; response_revision:1624; number_of_response:1; }","duration":"194.637638ms","start":"2025-09-08T13:50:21.092046Z","end":"2025-09-08T13:50:21.286683Z","steps":["trace[1236881016] 'process raft request'  (duration: 97.58463ms)","trace[1236881016] 'compare'  (duration: 96.851794ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:50:21.286684Z","caller":"traceutil/trace.go:172","msg":"trace[884450013] transaction","detail":"{read_only:false; response_revision:1625; number_of_response:1; }","duration":"160.154347ms","start":"2025-09-08T13:50:21.126512Z","end":"2025-09-08T13:50:21.286667Z","steps":["trace[884450013] 'process raft request'  (duration: 160.084944ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:52:46 up  3:35,  0 users,  load average: 0.40, 1.22, 1.84
	Linux addons-329194 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6b4c3a802ad6ef5ff6ba73af92175e9dc4887e1f32f0e6b5698547e53949f799] <==
	I0908 13:50:40.998966       1 main.go:301] handling current node
	I0908 13:50:50.999624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:50.999686       1 main.go:301] handling current node
	I0908 13:51:00.999725       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:00.999771       1 main.go:301] handling current node
	I0908 13:51:11.000556       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:11.000590       1 main.go:301] handling current node
	I0908 13:51:20.999745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:20.999776       1 main.go:301] handling current node
	I0908 13:51:31.004857       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:31.004894       1 main.go:301] handling current node
	I0908 13:51:41.001276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:41.001310       1 main.go:301] handling current node
	I0908 13:51:51.001098       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:51.001138       1 main.go:301] handling current node
	I0908 13:52:01.004585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:01.004635       1 main.go:301] handling current node
	I0908 13:52:11.004570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:11.004611       1 main.go:301] handling current node
	I0908 13:52:21.000570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:21.000609       1 main.go:301] handling current node
	I0908 13:52:31.004572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:31.004612       1 main.go:301] handling current node
	I0908 13:52:40.999131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:40.999163       1 main.go:301] handling current node
	
	
	==> kube-apiserver [569001a5202f926e8d723f8a2e8a87e8660a9166d9ba51ef45842cb88cf6a8e4] <==
	E0908 13:49:46.270542       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51328: use of closed network connection
	E0908 13:49:46.438274       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51354: use of closed network connection
	I0908 13:49:55.537601       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.228.227"}
	I0908 13:50:12.706720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:50:15.329161       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 13:50:18.057869       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 13:50:18.235012       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.91.47"}
	I0908 13:50:31.787712       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0908 13:50:37.099321       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 13:50:41.286196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:50:59.759346       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:50:59.759402       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:50:59.774373       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:50:59.774417       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:50:59.802847       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:50:59.803012       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:50:59.811819       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:50:59.811941       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 13:51:00.789055       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 13:51:00.812488       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0908 13:51:00.888961       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0908 13:51:27.190545       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:51:47.332422       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:52:32.128130       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:52:44.590765       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.87.94"}
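	
	(Note: the last line records the ClusterIP allocation for default/hello-world-app. A quick cross-check that the Service exists with that IP:
	
	  kubectl --context addons-329194 get svc hello-world-app -o wide)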
	
	
	==> kube-controller-manager [6ad93fc5b84d96e914389ecdcafe28fbce1ecf6fe60b37da081f7c27136a648c] <==
	E0908 13:51:09.586260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:09.790598       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:09.791594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:10.170172       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:10.171168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:17.399081       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:17.400262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:20.107135       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:20.108165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:20.714769       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:20.715832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:34.034211       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:34.035264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:37.128795       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:37.129821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:42.537459       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:42.538549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:06.730885       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:06.732076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:07.775953       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:07.776996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:11.016938       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:11.018001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:41.600292       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:41.601341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [153d2c4b1dc01a80ac11c9b034734de1cd9641b8d8eab4eaea36dea0c7d73b43] <==
	I0908 13:47:40.894722       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:47:41.890817       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:47:41.992008       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:47:41.992240       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:47:41.994126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:47:42.200796       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:47:42.200942       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:47:42.210476       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:47:42.211035       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:47:42.211138       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:47:42.214461       1 config.go:309] "Starting node config controller"
	I0908 13:47:42.214566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:47:42.214604       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:47:42.214665       1 config.go:200] "Starting service config controller"
	I0908 13:47:42.215776       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:47:42.215151       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:47:42.215832       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:47:42.215161       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:47:42.215845       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:47:42.316142       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:47:42.316185       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:47:42.316220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
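	
	(Note: the startup warning above carries kube-proxy's own suggested remedy. As a sketch only -- the config.conf key and the "primary" value follow the kubeadm ConfigMap layout and current KubeProxyConfiguration, which is an assumption for this cluster:
	
	  kubectl --context addons-329194 -n kube-system edit configmap kube-proxy
	  # in the config.conf document, set:
	  #   nodePortAddresses: ["primary"]
	  kubectl --context addons-329194 -n kube-system delete pod -l k8s-app=kube-proxy   # restart to pick up the change)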
	
	
	==> kube-scheduler [6bae7fe75868c4afe85a10bd92b1778c32407770f05af40b2838c034eb8f0b84] <==
	I0908 13:47:29.715389       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:47:29.717356       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:47:29.717393       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:47:29.717792       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:47:29.717848       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 13:47:29.718895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 13:47:29.719177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 13:47:29.720180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:47:29.720408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:47:29.720496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:47:29.720593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:47:29.720657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:47:29.720727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:47:29.720897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:47:29.720932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:47:29.720986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:47:29.721093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:47:29.721100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:47:29.721160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:47:29.721268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 13:47:29.721259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 13:47:29.721445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:47:29.721717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 13:47:29.721815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I0908 13:47:30.818340       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.838716    1686 container_manager_linux.go:562] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8, memory: /docker/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8/system.slice/kubelet.service"
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.846062    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/84c971f8581d3cd224b6249f15d9aa751c1bdb2339d71eb4b38110d5e54fd323/diff" to get inode usage: stat /var/lib/containers/storage/overlay/84c971f8581d3cd224b6249f15d9aa751c1bdb2339d71eb4b38110d5e54fd323/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.846071    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/84c971f8581d3cd224b6249f15d9aa751c1bdb2339d71eb4b38110d5e54fd323/diff" to get inode usage: stat /var/lib/containers/storage/overlay/84c971f8581d3cd224b6249f15d9aa751c1bdb2339d71eb4b38110d5e54fd323/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.890576    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f0fadedb0a6fdbb71e6e553cbfd39338fa6702eae999d759c15adf7af9850f16/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f0fadedb0a6fdbb71e6e553cbfd39338fa6702eae999d759c15adf7af9850f16/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.890608    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2591be5967089c15d858ebfcb95c9457f41685d5f7a8649e1c636122159aec55/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2591be5967089c15d858ebfcb95c9457f41685d5f7a8649e1c636122159aec55/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.890627    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2591be5967089c15d858ebfcb95c9457f41685d5f7a8649e1c636122159aec55/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2591be5967089c15d858ebfcb95c9457f41685d5f7a8649e1c636122159aec55/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.890647    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b36e9a43863486817c520c125b76364999cea2fce8ca13d208cdb0b9567989d1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b36e9a43863486817c520c125b76364999cea2fce8ca13d208cdb0b9567989d1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.890651    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2808ae1e35b66af5b153178f24202d0fc968fc2663c3c811eef19f5d8a744ba5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2808ae1e35b66af5b153178f24202d0fc968fc2663c3c811eef19f5d8a744ba5/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.890662    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2808ae1e35b66af5b153178f24202d0fc968fc2663c3c811eef19f5d8a744ba5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2808ae1e35b66af5b153178f24202d0fc968fc2663c3c811eef19f5d8a744ba5/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.891899    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b868699cabaedd783573494d144e06a317483e87927959b7c19e941f03e42365/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b868699cabaedd783573494d144e06a317483e87927959b7c19e941f03e42365/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.891934    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fa525bd5a31e922a660fd0c66305e380438bc1b78546b966a2f97cd7c3bd5eb4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fa525bd5a31e922a660fd0c66305e380438bc1b78546b966a2f97cd7c3bd5eb4/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.891941    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fa525bd5a31e922a660fd0c66305e380438bc1b78546b966a2f97cd7c3bd5eb4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fa525bd5a31e922a660fd0c66305e380438bc1b78546b966a2f97cd7c3bd5eb4/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.891952    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f0fadedb0a6fdbb71e6e553cbfd39338fa6702eae999d759c15adf7af9850f16/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f0fadedb0a6fdbb71e6e553cbfd39338fa6702eae999d759c15adf7af9850f16/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.891965    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b36e9a43863486817c520c125b76364999cea2fce8ca13d208cdb0b9567989d1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b36e9a43863486817c520c125b76364999cea2fce8ca13d208cdb0b9567989d1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.891977    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b868699cabaedd783573494d144e06a317483e87927959b7c19e941f03e42365/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b868699cabaedd783573494d144e06a317483e87927959b7c19e941f03e42365/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.892030    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a2ea5ac78902bfe2dd55874b6185f29ccf6f2fcf839c5de95684485f6a4866b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a2ea5ac78902bfe2dd55874b6185f29ccf6f2fcf839c5de95684485f6a4866b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.892085    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a2ea5ac78902bfe2dd55874b6185f29ccf6f2fcf839c5de95684485f6a4866b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a2ea5ac78902bfe2dd55874b6185f29ccf6f2fcf839c5de95684485f6a4866b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.924448    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/697d1af60ad4a8146a85f2f5ed737ebb93b720960811689eff1f567880a28aeb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/697d1af60ad4a8146a85f2f5ed737ebb93b720960811689eff1f567880a28aeb/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.926687    1686 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/697d1af60ad4a8146a85f2f5ed737ebb93b720960811689eff1f567880a28aeb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/697d1af60ad4a8146a85f2f5ed737ebb93b720960811689eff1f567880a28aeb/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.950836    1686 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339551950590546  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608710}  inodes_used:{value:230}}"
	Sep 08 13:52:31 addons-329194 kubelet[1686]: E0908 13:52:31.950872    1686 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339551950590546  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608710}  inodes_used:{value:230}}"
	Sep 08 13:52:41 addons-329194 kubelet[1686]: E0908 13:52:41.953216    1686 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339561952936756  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608710}  inodes_used:{value:230}}"
	Sep 08 13:52:41 addons-329194 kubelet[1686]: E0908 13:52:41.953253    1686 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339561952936756  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608710}  inodes_used:{value:230}}"
	Sep 08 13:52:44 addons-329194 kubelet[1686]: I0908 13:52:44.627771    1686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66h4\" (UniqueName: \"kubernetes.io/projected/a22cdb6b-2cd1-4f0e-9bf4-989cfe1199c7-kube-api-access-n66h4\") pod \"hello-world-app-5d498dc89-6mtgz\" (UID: \"a22cdb6b-2cd1-4f0e-9bf4-989cfe1199c7\") " pod="default/hello-world-app-5d498dc89-6mtgz"
	Sep 08 13:52:44 addons-329194 kubelet[1686]: W0908 13:52:44.800184    1686 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2d117662d53410b4993153a8256a0f35b49c0628f5b7ec0b5b4707b6e1b98ca8/crio-e8eb79402e56400e14fd588590171e437f579969f545dc1b4afd439dd3ec1123 WatchSource:0}: Error finding container e8eb79402e56400e14fd588590171e437f579969f545dc1b4afd439dd3ec1123: Status 404 returned error can't find the container with id e8eb79402e56400e14fd588590171e437f579969f545dc1b4afd439dd3ec1123
	
	
	==> storage-provisioner [1b98d8dfa5653c6dbddf6ee9b7020aa81178f3d3eed0bc43464ae669d3ef3dd2] <==
	W0908 13:52:22.042196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:24.045555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:24.049550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:26.052210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:26.056227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:28.059403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:28.064478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:30.067109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:30.071291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:32.074424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:32.078659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:34.082056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:34.086156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:36.089048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:36.093437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:38.097000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:38.101044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:40.104187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:40.108643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:42.112451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:42.116706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:44.119716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:44.123838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:46.127380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:52:46.132543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
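Note on the log dump above: the repeated kubelet errors (fsHandler "failed to collect filesystem stats" and eviction_manager "missing image stats") appear to be cAdvisor racing CRI-O container teardown, and the storage-provisioner warnings come from its leader election still reading the deprecated v1 Endpoints API; none of these is the ingress failure itself. The replacement API that the deprecation warning points at can be listed directly; a purely illustrative check against this cluster would be:

    kubectl --context addons-329194 get endpointslices.discovery.k8s.io -A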
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-329194 -n addons-329194
helpers_test.go:269: (dbg) Run:  kubectl --context addons-329194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-6mtgz ingress-nginx-admission-create-ct9v4 ingress-nginx-admission-patch-z4g4l
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-329194 describe pod hello-world-app-5d498dc89-6mtgz ingress-nginx-admission-create-ct9v4 ingress-nginx-admission-patch-z4g4l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-329194 describe pod hello-world-app-5d498dc89-6mtgz ingress-nginx-admission-create-ct9v4 ingress-nginx-admission-patch-z4g4l: exit status 1 (66.882224ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-6mtgz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-329194/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:52:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n66h4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n66h4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-6mtgz to addons-329194
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ct9v4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z4g4l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-329194 describe pod hello-world-app-5d498dc89-6mtgz ingress-nginx-admission-create-ct9v4 ingress-nginx-admission-patch-z4g4l: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 addons disable ingress-dns --alsologtostderr -v=1: (1.35255799s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 addons disable ingress --alsologtostderr -v=1: (7.65154526s)
--- FAIL: TestAddons/parallel/Ingress (158.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-746536 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-746536 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w8wrb" [8b6f4b92-e479-46c4-95ad-23e44784c6d0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-746536 -n functional-746536
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 14:06:10.639964462 +0000 UTC m=+1172.081269167
functional_test.go:1645: (dbg) Run:  kubectl --context functional-746536 describe po hello-node-connect-7d85dfc575-w8wrb -n default
functional_test.go:1645: (dbg) kubectl --context functional-746536 describe po hello-node-connect-7d85dfc575-w8wrb -n default:
Name:             hello-node-connect-7d85dfc575-w8wrb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-746536/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:56:10 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd25s (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qd25s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8wrb to functional-746536
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m54s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m2s (x5 over 9m54s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m51s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m36s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-746536 logs hello-node-connect-7d85dfc575-w8wrb -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-746536 logs hello-node-connect-7d85dfc575-w8wrb -n default: exit status 1 (70.707855ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w8wrb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-746536 logs hello-node-connect-7d85dfc575-w8wrb -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-746536 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-w8wrb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-746536/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:56:10 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd25s (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qd25s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8wrb to functional-746536
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m54s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m2s (x5 over 9m54s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m51s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m36s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
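The Failed events above carry the root cause: the deployment was created with the unqualified image name "kicbase/echo-server", and CRI-O, unlike Docker, will not expand a short name unless an alias or an unqualified-search registry is configured, so every pull attempt fails before reaching any registry. A minimal sketch of the two usual remedies, neither of which is performed by this test run:

    # 1) Re-point the deployment at a fully-qualified reference
    kubectl --context functional-746536 set image deployment/hello-node-connect \
        echo-server=docker.io/kicbase/echo-server:1.0

    # 2) Or permit short-name expansion on the node, in /etc/containers/registries.conf:
    #    unqualified-search-registries = ["docker.io"]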

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-746536 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-746536 logs -l app=hello-node-connect: exit status 1 (61.626252ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w8wrb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-746536 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-746536 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.241.55
IPs:                      10.109.241.55
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31267/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
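The empty Endpoints field is the service-side symptom of the same failure: no pod behind the selector ever became Ready, so the NodePort has nothing to forward to. An illustrative way to confirm this from the EndpointSlice side:

    kubectl --context functional-746536 get endpointslices -l kubernetes.io/service-name=hello-node-connect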
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-746536
helpers_test.go:243: (dbg) docker inspect functional-746536:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838",
	        "Created": "2025-09-08T13:53:51.533099818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 524753,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:53:51.56260041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/hostname",
	        "HostsPath": "/var/lib/docker/containers/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/hosts",
	        "LogPath": "/var/lib/docker/containers/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838-json.log",
	        "Name": "/functional-746536",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-746536:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-746536",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838",
	                "LowerDir": "/var/lib/docker/overlay2/99e5b7a292de66ad514a4f7e0a9d2ca8fc552662abe128ba2e4b9bf8472f18b9-init/diff:/var/lib/docker/overlay2/b93813c424f19944b84d6650258ee42fc88dbf4e092111f8eb9116f587feb593/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99e5b7a292de66ad514a4f7e0a9d2ca8fc552662abe128ba2e4b9bf8472f18b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99e5b7a292de66ad514a4f7e0a9d2ca8fc552662abe128ba2e4b9bf8472f18b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99e5b7a292de66ad514a4f7e0a9d2ca8fc552662abe128ba2e4b9bf8472f18b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-746536",
	                "Source": "/var/lib/docker/volumes/functional-746536/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-746536",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-746536",
	                "name.minikube.sigs.k8s.io": "functional-746536",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a39e775432f81e7ecbc4c23bc20146595c0f627ab626c43930a5d0696afa739",
	            "SandboxKey": "/var/run/docker/netns/9a39e775432f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-746536": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:ea:9e:56:95:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba18dff56a2f045c5babc5960e3ca60ff57ec670d90331c0d9c23f3d2423f654",
	                    "EndpointID": "2f8e84160418eedb1ff4610acfb46d7557d133ba814b36dd7f7be33ef4cb3e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-746536",
	                        "c955699c78d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
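In the inspect output above, the top-level NetworkSettings.IPAddress is empty because the container is attached to the user-defined "functional-746536" network; the usable address (192.168.49.2) and the host port mappings live under Networks and Ports. Either of these illustrative commands reads back the host side of the apiserver port (8441/tcp, mapped here to 127.0.0.1:33152):

    docker port functional-746536 8441/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-746536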
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-746536 -n functional-746536
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 logs -n 25: (1.450312509s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-746536 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ ssh            │ functional-746536 ssh -- ls -la /mount-9p                                                                          │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ ssh            │ functional-746536 ssh sudo umount -f /mount-9p                                                                     │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ mount          │ -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount2 --alsologtostderr -v=1 │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ ssh            │ functional-746536 ssh findmnt -T /mount1                                                                           │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ mount          │ -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount3 --alsologtostderr -v=1 │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ mount          │ -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount1 --alsologtostderr -v=1 │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ ssh            │ functional-746536 ssh findmnt -T /mount1                                                                           │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ ssh            │ functional-746536 ssh findmnt -T /mount2                                                                           │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ ssh            │ functional-746536 ssh findmnt -T /mount3                                                                           │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ mount          │ -p functional-746536 --kill=true                                                                                   │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ start          │ -p functional-746536 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ start          │ -p functional-746536 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ start          │ -p functional-746536 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-746536 --alsologtostderr -v=1                                                     │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ update-context │ functional-746536 update-context --alsologtostderr -v=2                                                            │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ update-context │ functional-746536 update-context --alsologtostderr -v=2                                                            │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ update-context │ functional-746536 update-context --alsologtostderr -v=2                                                            │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ image          │ functional-746536 image ls --format short --alsologtostderr                                                        │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ image          │ functional-746536 image ls --format yaml --alsologtostderr                                                         │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ ssh            │ functional-746536 ssh pgrep buildkitd                                                                              │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │                     │
	│ image          │ functional-746536 image build -t localhost/my-image:functional-746536 testdata/build --alsologtostderr             │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ image          │ functional-746536 image ls                                                                                         │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ image          │ functional-746536 image ls --format json --alsologtostderr                                                         │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	│ image          │ functional-746536 image ls --format table --alsologtostderr                                                        │ functional-746536 │ jenkins │ v1.36.0 │ 08 Sep 25 13:56 UTC │ 08 Sep 25 13:56 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:56:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:56:28.870232  541734 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:28.870469  541734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:28.870477  541734 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:28.870481  541734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:28.870675  541734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 13:56:28.871239  541734 out.go:368] Setting JSON to false
	I0908 13:56:28.872264  541734 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13135,"bootTime":1757326654,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:56:28.872376  541734 start.go:140] virtualization: kvm guest
	I0908 13:56:28.874355  541734 out.go:179] * [functional-746536] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:56:28.875827  541734 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:56:28.875905  541734 notify.go:220] Checking for updates...
	I0908 13:56:28.878247  541734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:56:28.879417  541734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:56:28.880608  541734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 13:56:28.881699  541734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:56:28.882840  541734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:56:28.884439  541734 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:28.884997  541734 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:56:28.908830  541734 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:56:28.908920  541734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:28.961714  541734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 13:56:28.951071429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:56:28.961820  541734 docker.go:318] overlay module found
	I0908 13:56:28.964489  541734 out.go:179] * Using the docker driver based on existing profile
	I0908 13:56:28.965658  541734 start.go:304] selected driver: docker
	I0908 13:56:28.965670  541734 start.go:918] validating driver "docker" against &{Name:functional-746536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-746536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:28.965767  541734 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:56:28.965853  541734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:29.013442  541734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 13:56:29.004192021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:56:29.014367  541734 cni.go:84] Creating CNI manager for ""
	I0908 13:56:29.014445  541734 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:56:29.014517  541734 start.go:348] cluster config:
	{Name:functional-746536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-746536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:29.016186  541734 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 08 13:56:32 functional-746536 crio[5514]: time="2025-09-08 13:56:32.495340738Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 13:56:32 functional-746536 crio[5514]: time="2025-09-08 13:56:32.509180770Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9cd79d08170d35dc52b2b37c7f58f59a5e7a00db45f039e1efb7e77704af6aa1/merged/etc/group: no such file or directory"
	Sep 08 13:56:32 functional-746536 crio[5514]: time="2025-09-08 13:56:32.544708832Z" level=info msg="Created container 0a2981bac953b4b5a7f461a96f2397f0fbb507f3ef17e559c870a1413916475b: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-26f8h/dashboard-metrics-scraper" id=29f05e92-f282-4ea5-88ea-2de8f6a32cf3 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 13:56:32 functional-746536 crio[5514]: time="2025-09-08 13:56:32.545385443Z" level=info msg="Starting container: 0a2981bac953b4b5a7f461a96f2397f0fbb507f3ef17e559c870a1413916475b" id=2aed0112-efb6-4cea-a12d-92ee02e38af8 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 13:56:32 functional-746536 crio[5514]: time="2025-09-08 13:56:32.551272042Z" level=info msg="Started container" PID=9632 containerID=0a2981bac953b4b5a7f461a96f2397f0fbb507f3ef17e559c870a1413916475b description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-26f8h/dashboard-metrics-scraper id=2aed0112-efb6-4cea-a12d-92ee02e38af8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce12449b723de3e583a171b38c11be6a8d94f7fe7decb62d08c39de455036558
	Sep 08 13:56:32 functional-746536 crio[5514]: time="2025-09-08 13:56:32.931090792Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.358954996Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6783ab02-04fb-49e7-803e-4c8840ffdab3 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.359630307Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=59ee2c11-358e-4535-a768-415ea6ccd2b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.360455334Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=59ee2c11-358e-4535-a768-415ea6ccd2b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.361453367Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=45d4b690-2486-4afa-914f-a9ce209a3041 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.362302149Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=45d4b690-2486-4afa-914f-a9ce209a3041 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.365438276Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bqth4/kubernetes-dashboard" id=c47cea44-0777-44dd-ae90-ae384c2daed4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.365575540Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.376681546Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/95dfdc404c69a7ed03fcc4e4e18e2b3d1b3b70701eb87165810608c3f81f4a24/merged/etc/group: no such file or directory"
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.411831157Z" level=info msg="Created container 302e6aceed8b84eacb68de89432b4ce3d1614c003bc069f542d14abe88c3f5fb: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bqth4/kubernetes-dashboard" id=c47cea44-0777-44dd-ae90-ae384c2daed4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.412629383Z" level=info msg="Starting container: 302e6aceed8b84eacb68de89432b4ce3d1614c003bc069f542d14abe88c3f5fb" id=80673409-e738-4055-977e-e023de51d763 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 13:56:37 functional-746536 crio[5514]: time="2025-09-08 13:56:37.418801546Z" level=info msg="Started container" PID=9967 containerID=302e6aceed8b84eacb68de89432b4ce3d1614c003bc069f542d14abe88c3f5fb description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bqth4/kubernetes-dashboard id=80673409-e738-4055-977e-e023de51d763 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5f5fe1bfab7ae3a684efbd188368f793f79fbc28eed5f13b553eac664558145
	Sep 08 13:56:53 functional-746536 crio[5514]: time="2025-09-08 13:56:53.102179346Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5f23026b-e92b-4e93-a476-0288c4f6bde2 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:56:56 functional-746536 crio[5514]: time="2025-09-08 13:56:56.102357722Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=256f0701-82ae-4e5b-b33e-e07e68341081 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:57:34 functional-746536 crio[5514]: time="2025-09-08 13:57:34.102413359Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=39ac6084-9a43-4368-8997-56dfeb90c1ee name=/runtime.v1.ImageService/PullImage
	Sep 08 13:57:44 functional-746536 crio[5514]: time="2025-09-08 13:57:44.102983107Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=29f3179a-687e-4f55-a90e-e01e12645daa name=/runtime.v1.ImageService/PullImage
	Sep 08 13:59:08 functional-746536 crio[5514]: time="2025-09-08 13:59:08.102406538Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9f2dd65e-a659-4679-b390-3ce72a362027 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:59:14 functional-746536 crio[5514]: time="2025-09-08 13:59:14.102699069Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cd8e2055-fdb8-43f7-afc7-2bfaa401caf8 name=/runtime.v1.ImageService/PullImage
	Sep 08 14:01:58 functional-746536 crio[5514]: time="2025-09-08 14:01:58.102419875Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=126683d0-6b5e-41a4-a2ad-8f77ab702821 name=/runtime.v1.ImageService/PullImage
	Sep 08 14:01:59 functional-746536 crio[5514]: time="2025-09-08 14:01:59.102756199Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=99ec0866-601d-4221-8c79-6578c9f0fa70 name=/runtime.v1.ImageService/PullImage
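
Note: from 13:56:53 onward CRI-O keeps re-requesting "Pulling image: kicbase/echo-server:latest" with no matching "Pulled image" line, so the pull never completes within the window covered here; that is consistent with the hello-node pods behind the failed ServiceCmd tests staying unready. A quick manual check, assuming the functional-746536 profile is still up and the deployment carries kubectl's default app=hello-node label:

	minikube -p functional-746536 ssh -- sudo crictl pull kicbase/echo-server:latest
	kubectl --context functional-746536 get pods -l app=hello-node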
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	302e6aceed8b8       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   b5f5fe1bfab7a       kubernetes-dashboard-855c9754f9-bqth4
	0a2981bac953b       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   ce12449b723de       dashboard-metrics-scraper-77bf4d6c4c-26f8h
	18bc7a6badb42       docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57                  9 minutes ago       Running             myfrontend                  0                   8e45b2d6fc8bf       sp-pod
	b23ab850fb98f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   9144d357accc8       busybox-mount
	338a98941fcf5       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   2921c76ecf87e       nginx-svc
	973f82a751501       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  10 minutes ago      Running             mysql                       0                   3ebeace28e4ce       mysql-5bb876957f-85tcz
	42bd33f7c3d5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     3                   02b2def2aa4f7       coredns-66bc5c9577-588xz
	469577f35613f       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  3                   d0d47b8d8c774       kube-proxy-8fxxs
	3bfefc55baee7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   6135dc32364d9       storage-provisioner
	dc0671a2e725a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 3                   a764acf66563f       kindnet-7j9mg
	cf176129cf3cd       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   6422adca3ae7c       kube-apiserver-functional-746536
	7e2eb8b7a0334       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              3                   aa848398f5451       kube-scheduler-functional-746536
	6e40e9409339f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     3                   6e31fb39e86f7       kube-controller-manager-functional-746536
	cda415f3f7e35       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        3                   d28d2f2dc0841       etcd-functional-746536
	de20c9dabbdef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     2                   02b2def2aa4f7       coredns-66bc5c9577-588xz
	4fb10a9086139       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 2                   a764acf66563f       kindnet-7j9mg
	aa414333fe19d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  2                   d0d47b8d8c774       kube-proxy-8fxxs
	0ad416ef23979       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              2                   aa848398f5451       kube-scheduler-functional-746536
	a30205ee0612d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   6135dc32364d9       storage-provisioner
	eb3be6619a46d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     2                   6e31fb39e86f7       kube-controller-manager-functional-746536
	584f0be956da3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        2                   d28d2f2dc0841       etcd-functional-746536
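
Note: the ATTEMPT column counts container restarts; the Exited rows at ATTEMPT 2 are the pre-restart generation of the control-plane containers, replaced by the Running rows above them after the test's restart phase. The same listing, including exited containers, can be reproduced on the node (crictl ships in the kicbase image):

	minikube -p functional-746536 ssh -- sudo crictl ps -a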
	
	
	==> coredns [42bd33f7c3d5ba9234e577868f6f55d7303a95ca9e8e765a607c7194cbc58e7a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53928 - 41102 "HINFO IN 3294239926206291581.7971854944636994300. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.473400675s
	
	
	==> coredns [de20c9dabbdef167f9ecb86eaeedd3f085563d25be9bffc5ddad04a084fd5a96] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53795 - 31019 "HINFO IN 1939479175601511338.5768945010931073566. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.08796835s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-746536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-746536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=functional-746536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_54_07_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:54:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-746536
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 14:06:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 14:06:07 +0000   Mon, 08 Sep 2025 13:54:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 14:06:07 +0000   Mon, 08 Sep 2025 13:54:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 14:06:07 +0000   Mon, 08 Sep 2025 13:54:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 14:06:07 +0000   Mon, 08 Sep 2025 13:54:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-746536
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d23bef42c2743a3b08e5fe9b388086f
	  System UUID:                f7fb54fa-cd6e-4b16-a3ce-eaad397ff314
	  Boot ID:                    d8938bde-5570-4c3e-82d1-cfb806dfa720
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fb6fv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  default                     hello-node-connect-7d85dfc575-w8wrb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-85tcz                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-66bc5c9577-588xz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-746536                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7j9mg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-746536              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-746536     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8fxxs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-746536              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-26f8h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bqth4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-746536 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-746536 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-746536 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-746536 event: Registered Node functional-746536 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-746536 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-746536 event: Registered Node functional-746536 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-746536 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-746536 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-746536 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-746536 event: Registered Node functional-746536 in Controller
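
Note: the duplicated Starting/CgroupV1/NodeHasSufficient* entries reflect the kubelet being restarted during the functional test's stop/start phases, hence the three kube-proxy "Starting" events and three RegisteredNode events. This section is the standard node dump and can be regenerated with:

	kubectl --context functional-746536 describe node functional-746536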
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 b3 0c 6b ed f7 08 06
	[  +0.000353] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 21 55 74 ed 2f 08 06
	[ +19.558646] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 9c ff fc 80 83 08 06
	[  +0.001186] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 55 17 2e eb 59 08 06
	[Sep 8 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 81 9e 63 0c 43 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9e 55 17 2e eb 59 08 06
	[Sep 8 13:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +1.030999] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +2.015806] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +4.159558] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[  +8.191102] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[Sep 8 13:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
	[ +32.764548] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 3a 7c 52 14 38 6b 7e 4e f3 ab da 69 08 00
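
Note: the repeated "martian source 10.244.0.21 from 127.0.0.1" entries starting at 13:50 are packets with a loopback source address arriving on eth0. Because kube-proxy sets route_localnet=1 (see its logs below) so that NodePorts also answer on localhost, a curl against 127.0.0.1 that gets DNAT'd into the pod network can produce exactly this kernel logging, and the timestamps line up with the failed curl in the Ingress test. The relevant sysctls can be inspected on the node (treating eth0 as the relevant interface, per the lines above):

	minikube -p functional-746536 ssh -- sysctl net.ipv4.conf.eth0.route_localnet net.ipv4.conf.all.log_martians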
	
	
	==> etcd [584f0be956da3f9d04456f344fcdaf3a05f89f87bcf494b368b6e5f9399d26ba] <==
	{"level":"warn","ts":"2025-09-08T13:54:52.411108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:54:52.423673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:54:52.430077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:54:52.456451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:54:52.462955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:54:52.469187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:54:52.520726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51114","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:55:17.046581Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T13:55:17.046687Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-746536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T13:55:17.046787Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T13:55:17.184752Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T13:55:17.186296Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T13:55:17.186342Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-08T13:55:17.186371Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T13:55:17.186406Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-08T13:55:17.186411Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T13:55:17.186413Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-08T13:55:17.186417Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T13:55:17.186377Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T13:55:17.186447Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T13:55:17.186454Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T13:55:17.188890Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T13:55:17.188954Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T13:55:17.188975Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T13:55:17.188981Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-746536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [cda415f3f7e35e61a8acca4d08cbb0b2cf3d4911ffdd405e31efb885c346d4cd] <==
	{"level":"warn","ts":"2025-09-08T13:55:32.634716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.695621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.703108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.709346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.716935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.748735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.792108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.799037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.807067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.815429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.826216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.829810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.836366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.896321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.903820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.911416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.918840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.952254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:32.996522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:33.005260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:55:33.105803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47124","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:56:42.233218Z","caller":"traceutil/trace.go:172","msg":"trace[817279026] transaction","detail":"{read_only:false; response_revision:954; number_of_response:1; }","duration":"125.659317ms","start":"2025-09-08T13:56:42.107536Z","end":"2025-09-08T13:56:42.233195Z","steps":["trace[817279026] 'process raft request'  (duration: 122.201058ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T14:05:31.947143Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1230}
	{"level":"info","ts":"2025-09-08T14:05:31.967675Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1230,"took":"20.101221ms","hash":1254105001,"current-db-size-bytes":3678208,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1683456,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-08T14:05:31.967738Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1254105001,"revision":1230,"compact-revision":-1}
	
	
	==> kernel <==
	 14:06:12 up  3:48,  0 users,  load average: 0.10, 0.23, 0.92
	Linux functional-746536 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4fb10a9086139f986ab1a9d305d32dc7b18aafc2c556d9a58a0eef34de09de41] <==
	I0908 13:55:01.095895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:55:01.188767       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 13:55:01.188974       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:55:01.188992       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:55:01.189019       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:55:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:55:01.488704       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:55:01.489242       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:55:01.489271       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:55:01.489538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0908 13:55:01.789407       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:55:01.789442       1 metrics.go:72] Registering metrics
	I0908 13:55:01.789497       1 controller.go:711] "Syncing nftables rules"
	I0908 13:55:11.397073       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:55:11.397132       1 main.go:301] handling current node
	
	
	==> kindnet [dc0671a2e725adb688c164568898133809762942c216011e73f9701adbe92e0f] <==
	I0908 14:04:05.200618       1 main.go:301] handling current node
	I0908 14:04:15.194021       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:04:15.194063       1 main.go:301] handling current node
	I0908 14:04:25.192624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:04:25.192662       1 main.go:301] handling current node
	I0908 14:04:35.192109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:04:35.192170       1 main.go:301] handling current node
	I0908 14:04:45.193353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:04:45.193390       1 main.go:301] handling current node
	I0908 14:04:55.192708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:04:55.192769       1 main.go:301] handling current node
	I0908 14:05:05.193395       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:05:05.193807       1 main.go:301] handling current node
	I0908 14:05:15.192898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:05:15.192933       1 main.go:301] handling current node
	I0908 14:05:25.195147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:05:25.195183       1 main.go:301] handling current node
	I0908 14:05:35.201245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:05:35.201288       1 main.go:301] handling current node
	I0908 14:05:45.194149       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:05:45.194192       1 main.go:301] handling current node
	I0908 14:05:55.194335       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:05:55.194438       1 main.go:301] handling current node
	I0908 14:06:05.201472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:06:05.201526       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cf176129cf3cd31496042b91d57fd04b870bb5e4f0d92691e8f7d8fc87a6d424] <==
	E0908 13:56:17.079868       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49276: use of closed network connection
	I0908 13:56:17.207889       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.236.44"}
	E0908 13:56:22.351741       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49356: use of closed network connection
	I0908 13:56:29.862523       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 13:56:30.109339       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.103.160"}
	I0908 13:56:30.124032       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.50.169"}
	E0908 13:56:30.770473       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55240: use of closed network connection
	I0908 13:56:34.083234       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:56:52.484060       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:57:46.814360       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:57:55.140796       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:59:01.106639       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:59:08.876054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:00:16.365816       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:00:18.794220       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:01:25.895078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:01:30.598984       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:02:34.896251       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:02:42.483183       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:03:53.428522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:04:02.716904       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:04:55.994038       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:05:07.499265       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:05:34.001178       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 14:05:58.474673       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [6e40e9409339faaf8bbcb82aa520373dd6914a2eb1e0a7b6e032838051fe56c6] <==
	I0908 13:55:37.316627       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 13:55:37.326906       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 13:55:37.329141       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 13:55:37.330383       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 13:55:37.332685       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 13:55:37.332722       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 13:55:37.332754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 13:55:37.332983       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 13:55:37.333014       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 13:55:37.333088       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:55:37.333104       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:55:37.333110       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:55:37.333167       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0908 13:55:37.333388       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 13:55:37.333453       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 13:55:37.333953       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 13:55:37.334571       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 13:55:37.336633       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:55:37.354594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 13:56:29.921302       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 13:56:29.989047       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 13:56:29.993079       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 13:56:29.993517       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 13:56:29.996656       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 13:56:30.003199       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
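
Note: the burst of "serviceaccount \"kubernetes-dashboard\" not found" errors at 13:56:29-13:56:30 is an apply-order race: the dashboard ReplicaSets sync before their ServiceAccount exists. The CRI-O log above shows both dashboard containers created successfully seconds later, so the errors were transient. To verify the namespace settled:

	kubectl --context functional-746536 -n kubernetes-dashboard get serviceaccounts,pods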
	
	
	==> kube-controller-manager [eb3be6619a46d06d84d0b723b1aa0b2af1510f47108b5d32d90eac6300c249fb] <==
	I0908 13:54:55.730043       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 13:54:55.743069       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 13:54:55.745821       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:54:55.748050       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:54:55.750298       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 13:54:55.751460       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 13:54:55.753705       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 13:54:55.763974       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 13:54:55.764043       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 13:54:55.764074       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 13:54:55.764081       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 13:54:55.764086       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 13:54:55.766458       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 13:54:55.772896       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 13:54:55.772928       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0908 13:54:55.772947       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 13:54:55.772964       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 13:54:55.772967       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 13:54:55.772998       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 13:54:55.773012       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 13:54:55.773018       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 13:54:55.777776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:54:55.780010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:54:55.791252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:54:55.793313       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [469577f35613f08ef526aea0b08adca5cfc06ca54dbcf15fc2eba9e2c6c47b14] <==
	I0908 13:55:34.891505       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:55:35.014302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:55:35.114888       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:55:35.114934       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:55:35.115027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:55:35.140419       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:55:35.140525       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:55:35.146099       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:55:35.146601       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:55:35.146638       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:55:35.149908       1 config.go:200] "Starting service config controller"
	I0908 13:55:35.149933       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:55:35.149954       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:55:35.149967       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:55:35.149991       1 config.go:309] "Starting node config controller"
	I0908 13:55:35.149997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:55:35.149988       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:55:35.150165       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:55:35.250137       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:55:35.250149       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:55:35.250175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:55:35.250287       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [aa414333fe19dd92936a8195cda70e6b048326a88e3c9be39bdfd2d163d43aee] <==
	I0908 13:55:00.106460       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:55:00.209826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:55:00.310438       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:55:00.310495       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:55:00.310607       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:55:00.334457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:55:00.334524       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:55:00.339574       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:55:00.339982       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:55:00.340024       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:55:00.341285       1 config.go:200] "Starting service config controller"
	I0908 13:55:00.341305       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:55:00.341311       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:55:00.341327       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:55:00.341414       1 config.go:309] "Starting node config controller"
	I0908 13:55:00.341495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:55:00.341528       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:55:00.341433       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:55:00.341560       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:55:00.442426       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:55:00.442444       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:55:00.442488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
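
Note: both kube-proxy generations emit the same advisory: with nodePortAddresses unset, NodePort traffic is accepted on every local IP, and route_localnet=1 is enabled so NodePorts also answer on localhost (which feeds the martian-source dmesg noise above). The log's own suggested remedy is to restrict this, for example by setting nodePortAddresses: ["primary"] in the kube-proxy configuration; a sketch, assuming the standard kubeadm-style ConfigMap name:

	kubectl --context functional-746536 -n kube-system edit configmap kube-proxy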
	
	
	==> kube-scheduler [0ad416ef23979a2d306dadcaf3f8300c6c7a9f98c254e93efd1b1415db44ae0a] <==
	I0908 13:54:57.790449       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:54:59.017725       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:54:59.017752       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:54:59.021929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:54:59.021933       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:54:59.021965       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:54:59.021979       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:54:59.021987       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:54:59.021962       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:54:59.022276       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:54:59.022299       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:54:59.122246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:54:59.122290       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 13:54:59.122240       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:55:17.047201       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 13:55:17.047363       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 13:55:17.047391       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 13:55:17.047424       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:55:17.047466       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 13:55:17.047500       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:55:17.047749       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 13:55:17.047930       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7e2eb8b7a03346989341d109bb0151d51c9a4c3e6c78b991291d406f627485eb] <==
	I0908 13:55:32.021874       1 serving.go:386] Generated self-signed cert in-memory
	W0908 13:55:33.908928       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 13:55:33.908966       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 13:55:33.908978       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 13:55:33.908988       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 13:55:34.013559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:55:34.013590       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:55:34.015398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:55:34.015446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:55:34.015790       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:55:34.015821       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:55:34.188847       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.312199    5884 manager.go:1116] Failed to create existing container: /crio-02b2def2aa4f79e55944f66daddf51b4d6b9a3c7cf34f954c2178107d9540480: Error finding container 02b2def2aa4f79e55944f66daddf51b4d6b9a3c7cf34f954c2178107d9540480: Status 404 returned error can't find the container with id 02b2def2aa4f79e55944f66daddf51b4d6b9a3c7cf34f954c2178107d9540480
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.312357    5884 manager.go:1116] Failed to create existing container: /crio-6e31fb39e86f772e85ad0f784bc5f016efbf26c9d10f18bf01dc55211fd03db4: Error finding container 6e31fb39e86f772e85ad0f784bc5f016efbf26c9d10f18bf01dc55211fd03db4: Status 404 returned error can't find the container with id 6e31fb39e86f772e85ad0f784bc5f016efbf26c9d10f18bf01dc55211fd03db4
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.312536    5884 manager.go:1116] Failed to create existing container: /docker/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/crio-02b2def2aa4f79e55944f66daddf51b4d6b9a3c7cf34f954c2178107d9540480: Error finding container 02b2def2aa4f79e55944f66daddf51b4d6b9a3c7cf34f954c2178107d9540480: Status 404 returned error can't find the container with id 02b2def2aa4f79e55944f66daddf51b4d6b9a3c7cf34f954c2178107d9540480
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.312748    5884 manager.go:1116] Failed to create existing container: /docker/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/crio-6135dc32364d974ddbb68fe240f698d17962ad3b9f69c8e6b8d5951a4b89c188: Error finding container 6135dc32364d974ddbb68fe240f698d17962ad3b9f69c8e6b8d5951a4b89c188: Status 404 returned error can't find the container with id 6135dc32364d974ddbb68fe240f698d17962ad3b9f69c8e6b8d5951a4b89c188
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.312902    5884 manager.go:1116] Failed to create existing container: /crio-d28d2f2dc0841f226bdbc5229bf41d57a6db028c0af7fd87d8bcc476e34adf61: Error finding container d28d2f2dc0841f226bdbc5229bf41d57a6db028c0af7fd87d8bcc476e34adf61: Status 404 returned error can't find the container with id d28d2f2dc0841f226bdbc5229bf41d57a6db028c0af7fd87d8bcc476e34adf61
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.313049    5884 manager.go:1116] Failed to create existing container: /crio-a764acf66563fa2ecb192aa0e372f22fd00dca4754e8cd0036aea75b22216a2d: Error finding container a764acf66563fa2ecb192aa0e372f22fd00dca4754e8cd0036aea75b22216a2d: Status 404 returned error can't find the container with id a764acf66563fa2ecb192aa0e372f22fd00dca4754e8cd0036aea75b22216a2d
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.313216    5884 manager.go:1116] Failed to create existing container: /docker/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/crio-d0d47b8d8c774cc21fe26b86ddfc6450b53bd631423e1f8bea49346ef5771280: Error finding container d0d47b8d8c774cc21fe26b86ddfc6450b53bd631423e1f8bea49346ef5771280: Status 404 returned error can't find the container with id d0d47b8d8c774cc21fe26b86ddfc6450b53bd631423e1f8bea49346ef5771280
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.313372    5884 manager.go:1116] Failed to create existing container: /docker/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/crio-6e31fb39e86f772e85ad0f784bc5f016efbf26c9d10f18bf01dc55211fd03db4: Error finding container 6e31fb39e86f772e85ad0f784bc5f016efbf26c9d10f18bf01dc55211fd03db4: Status 404 returned error can't find the container with id 6e31fb39e86f772e85ad0f784bc5f016efbf26c9d10f18bf01dc55211fd03db4
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.313522    5884 manager.go:1116] Failed to create existing container: /docker/c955699c78d80c42c03bcb0d410bcf1f4b379fe7beed549933f3057b52c43838/crio-186f204fbabb6bbd0795af0fd1e0789f376392a2250323f213418bb84d3b0b40: Error finding container 186f204fbabb6bbd0795af0fd1e0789f376392a2250323f213418bb84d3b0b40: Status 404 returned error can't find the container with id 186f204fbabb6bbd0795af0fd1e0789f376392a2250323f213418bb84d3b0b40
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.402469    5884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757340330402290083  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:05:30 functional-746536 kubelet[5884]: E0908 14:05:30.402502    5884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757340330402290083  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:05:38 functional-746536 kubelet[5884]: E0908 14:05:38.102533    5884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w8wrb" podUID="8b6f4b92-e479-46c4-95ad-23e44784c6d0"
	Sep 08 14:05:40 functional-746536 kubelet[5884]: E0908 14:05:40.403842    5884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757340340403623787  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:05:40 functional-746536 kubelet[5884]: E0908 14:05:40.403882    5884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757340340403623787  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:05:43 functional-746536 kubelet[5884]: E0908 14:05:43.101792    5884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-fb6fv" podUID="7feddde0-91db-4203-9ffe-4f682ace4e41"
	Sep 08 14:05:50 functional-746536 kubelet[5884]: E0908 14:05:50.405398    5884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757340350405182939  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:05:50 functional-746536 kubelet[5884]: E0908 14:05:50.405442    5884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757340350405182939  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:05:51 functional-746536 kubelet[5884]: E0908 14:05:51.101409    5884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w8wrb" podUID="8b6f4b92-e479-46c4-95ad-23e44784c6d0"
	Sep 08 14:05:56 functional-746536 kubelet[5884]: E0908 14:05:56.101631    5884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-fb6fv" podUID="7feddde0-91db-4203-9ffe-4f682ace4e41"
	Sep 08 14:06:00 functional-746536 kubelet[5884]: E0908 14:06:00.407051    5884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757340360406844101  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:06:00 functional-746536 kubelet[5884]: E0908 14:06:00.407089    5884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757340360406844101  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:06:05 functional-746536 kubelet[5884]: E0908 14:06:05.102006    5884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w8wrb" podUID="8b6f4b92-e479-46c4-95ad-23e44784c6d0"
	Sep 08 14:06:10 functional-746536 kubelet[5884]: E0908 14:06:10.102131    5884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-fb6fv" podUID="7feddde0-91db-4203-9ffe-4f682ace4e41"
	Sep 08 14:06:10 functional-746536 kubelet[5884]: E0908 14:06:10.408775    5884 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757340370408457300  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	Sep 08 14:06:10 functional-746536 kubelet[5884]: E0908 14:06:10.408814    5884 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757340370408457300  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303429}  inodes_used:{value:134}}"
	
	
	==> kubernetes-dashboard [302e6aceed8b84eacb68de89432b4ce3d1614c003bc069f542d14abe88c3f5fb] <==
	2025/09/08 13:56:37 Starting overwatch
	2025/09/08 13:56:37 Using namespace: kubernetes-dashboard
	2025/09/08 13:56:37 Using in-cluster config to connect to apiserver
	2025/09/08 13:56:37 Using secret token for csrf signing
	2025/09/08 13:56:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/08 13:56:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/08 13:56:37 Successful initial request to the apiserver, version: v1.34.0
	2025/09/08 13:56:37 Generating JWE encryption key
	2025/09/08 13:56:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/08 13:56:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/08 13:56:37 Initializing JWE encryption key from synchronized object
	2025/09/08 13:56:37 Creating in-cluster Sidecar client
	2025/09/08 13:56:37 Successful request to sidecar
	2025/09/08 13:56:37 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [3bfefc55baee76d3dfcec939c94cf1e5e5904a52eea30d11b4b04fc0c953d411] <==
	W0908 14:05:46.677160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:48.681110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:48.685005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:50.687683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:50.691960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:52.695212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:52.700667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:54.703635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:54.707641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:56.711347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:56.716752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:58.719792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:05:58.723832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:00.726638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:00.731765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:02.734673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:02.738781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:04.741716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:04.746081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:06.748932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:06.753016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:08.757385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:08.762327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:10.764986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:06:10.772372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a30205ee0612da2c828149df3c22328462af1cad06d4bcb34261c43319d225eb] <==
	I0908 13:54:51.080284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 13:54:51.082047       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
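The kube-scheduler warning in the dump above carries its own suggested remediation. Instantiated for the denied user, a sketch of that rolebinding would be (the binding name here is hypothetical):

    kubectl --context functional-746536 -n kube-system create rolebinding scheduler-auth-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler

As the log itself notes, the scheduler continues without authentication configuration, so this warning is informational rather than the cause of the failure below.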
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-746536 -n functional-746536
helpers_test.go:269: (dbg) Run:  kubectl --context functional-746536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fb6fv hello-node-connect-7d85dfc575-w8wrb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-746536 describe pod busybox-mount hello-node-75c85bcc94-fb6fv hello-node-connect-7d85dfc575-w8wrb
helpers_test.go:290: (dbg) kubectl --context functional-746536 describe pod busybox-mount hello-node-75c85bcc94-fb6fv hello-node-connect-7d85dfc575-w8wrb:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-746536/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:56:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b23ab850fb98f09ab5789cae2d8170d543b60ebd7a03481fe4fba77bea7587d0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 13:56:22 +0000
	      Finished:     Mon, 08 Sep 2025 13:56:22 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ph729 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ph729:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m52s  default-scheduler  Successfully assigned default/busybox-mount to functional-746536
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.245s (1.245s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fb6fv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-746536/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:56:17 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmlkx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bmlkx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m56s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fb6fv to functional-746536
	  Normal   Pulling    6m59s (x5 over 9m56s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m59s (x5 over 9m56s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m59s (x5 over 9m56s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m50s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m50s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w8wrb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-746536/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:56:10 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd25s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qd25s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8wrb to functional-746536
	  Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m5s (x5 over 9m57s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.03s)
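Every ServiceCmd failure in this run traces back to the kubelet error shown above: CRI-O refuses the short image name "kicbase/echo-server" because /etc/containers/registries.conf inside the node defines no unqualified-search registries. A minimal remediation sketch, assuming docker.io is the intended registry (unqualified-search-registries is standard containers-registries.conf syntax; the crio service name inside the minikube node is assumed):

    # Option 1: fully qualify the image so no short-name resolution is needed
    kubectl --context functional-746536 create deployment hello-node-connect --image=docker.io/kicbase/echo-server

    # Option 2: permit short-name resolution inside the minikube node
    minikube -p functional-746536 ssh
    echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
    sudo systemctl restart crio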

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-746536 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-746536 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fb6fv" [7feddde0-91db-4203-9ffe-4f682ace4e41] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-746536 -n functional-746536
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 14:06:17.507574428 +0000 UTC m=+1178.948879128
functional_test.go:1460: (dbg) Run:  kubectl --context functional-746536 describe po hello-node-75c85bcc94-fb6fv -n default
functional_test.go:1460: (dbg) kubectl --context functional-746536 describe po hello-node-75c85bcc94-fb6fv -n default:
Name:             hello-node-75c85bcc94-fb6fv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-746536/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:56:17 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmlkx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bmlkx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fb6fv to functional-746536
Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-746536 logs hello-node-75c85bcc94-fb6fv -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-746536 logs hello-node-75c85bcc94-fb6fv -n default: exit status 1 (61.872786ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-fb6fv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-746536 logs hello-node-75c85bcc94-fb6fv -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)
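The DeployApp timeout is the same short-name pull failure surfaced through a different test. A quick diagnostic sketch, assuming crictl is on the node's PATH as it normally is in minikube images:

    # confirm that no unqualified-search registries are configured in the node
    minikube -p functional-746536 ssh -- cat /etc/containers/registries.conf
    # verify that a fully qualified pull succeeds where the short name fails
    minikube -p functional-746536 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest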

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 service --namespace=default --https --url hello-node: exit status 115 (523.244137ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31683
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-746536 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
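Here minikube resolved the NodePort but exited with SVC_UNREACHABLE because no running pod backs the service, which is consistent with the ImagePullBackOff above. A sketch of the checks that would confirm this state (the app label comes from the describe output earlier; kubernetes.io/service-name is the standard EndpointSlice label):

    kubectl --context functional-746536 -n default get pods -l app=hello-node
    kubectl --context functional-746536 -n default get endpointslices -l kubernetes.io/service-name=hello-node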

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 service hello-node --url --format={{.IP}}: exit status 115 (522.229303ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-746536 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 service hello-node --url: exit status 115 (521.402348ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31683
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-746536 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31683
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    

Test pass (299/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.16
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.13
21 TestBinaryMirror 0.8
22 TestOffline 95.45
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 164.48
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 17.61
36 TestAddons/parallel/RegistryCreds 0.83
38 TestAddons/parallel/InspektorGadget 5.37
39 TestAddons/parallel/MetricsServer 5.77
41 TestAddons/parallel/CSI 66.29
42 TestAddons/parallel/Headlamp 28.17
43 TestAddons/parallel/CloudSpanner 5.5
44 TestAddons/parallel/LocalPath 57.46
45 TestAddons/parallel/NvidiaDevicePlugin 5.48
46 TestAddons/parallel/Yakd 10.65
47 TestAddons/parallel/AmdGpuDevicePlugin 6.46
48 TestAddons/StoppedEnableDisable 12.08
49 TestCertOptions 31.81
50 TestCertExpiration 230.65
52 TestForceSystemdFlag 26.77
53 TestForceSystemdEnv 25.52
55 TestKVMDriverInstallOrUpdate 2.67
59 TestErrorSpam/setup 22.51
60 TestErrorSpam/start 0.59
61 TestErrorSpam/status 0.88
62 TestErrorSpam/pause 1.52
63 TestErrorSpam/unpause 1.69
64 TestErrorSpam/stop 1.36
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 41.98
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 40.26
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
76 TestFunctional/serial/CacheCmd/cache/add_local 1.36
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 33.97
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.39
87 TestFunctional/serial/LogsFileCmd 1.4
88 TestFunctional/serial/InvalidService 3.95
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 10.58
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.88
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 31.47
102 TestFunctional/parallel/SSHCmd 0.74
103 TestFunctional/parallel/CpCmd 1.72
104 TestFunctional/parallel/MySQL 20.14
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.76
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.43
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.47
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.09
122 TestFunctional/parallel/ImageCommands/Setup 1.04
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.73
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.4
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.4
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.08
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
146 TestFunctional/parallel/ProfileCmd/profile_list 0.37
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
148 TestFunctional/parallel/MountCmd/any-port 5.39
149 TestFunctional/parallel/MountCmd/specific-port 1.47
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 197.19
164 TestMultiControlPlane/serial/DeployApp 4.9
165 TestMultiControlPlane/serial/PingHostFromPods 1.12
166 TestMultiControlPlane/serial/AddWorkerNode 24.8
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
169 TestMultiControlPlane/serial/CopyFile 16.31
170 TestMultiControlPlane/serial/StopSecondaryNode 12.54
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
172 TestMultiControlPlane/serial/RestartSecondaryNode 28.53
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 114.6
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.37
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
177 TestMultiControlPlane/serial/StopCluster 25.1
178 TestMultiControlPlane/serial/RestartCluster 58.69
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
180 TestMultiControlPlane/serial/AddSecondaryNode 67.12
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
185 TestJSONOutput/start/Command 72.67
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.68
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.78
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 29.37
211 TestKicCustomNetwork/use_default_bridge_network 25.44
212 TestKicExistingNetwork 26.32
213 TestKicCustomSubnet 25.2
214 TestKicStaticIP 28.79
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 50.86
219 TestMountStart/serial/StartWithMountFirst 5.55
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 5.59
222 TestMountStart/serial/VerifyMountSecond 0.24
223 TestMountStart/serial/DeleteFirst 1.59
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.18
226 TestMountStart/serial/RestartStopped 7.11
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 124.14
231 TestMultiNode/serial/DeployApp2Nodes 3.54
232 TestMultiNode/serial/PingHostFrom2Pods 0.76
233 TestMultiNode/serial/AddNode 57
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.62
236 TestMultiNode/serial/CopyFile 9.14
237 TestMultiNode/serial/StopNode 2.13
238 TestMultiNode/serial/StartAfterStop 7.31
239 TestMultiNode/serial/RestartKeepsNodes 80.03
240 TestMultiNode/serial/DeleteNode 5.21
241 TestMultiNode/serial/StopMultiNode 23.72
242 TestMultiNode/serial/RestartMultiNode 55.45
243 TestMultiNode/serial/ValidateNameConflict 25.37
248 TestPreload 108.48
250 TestScheduledStopUnix 98.8
253 TestInsufficientStorage 9.88
254 TestRunningBinaryUpgrade 48.47
256 TestKubernetesUpgrade 172.96
257 TestMissingContainerUpgrade 67.3
261 TestStoppedBinaryUpgrade/Setup 0.67
262 TestStoppedBinaryUpgrade/Upgrade 68.46
267 TestNetworkPlugins/group/false 9.8
279 TestPause/serial/Start 74.41
280 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
283 TestNoKubernetes/serial/StartWithK8s 24.45
284 TestNoKubernetes/serial/StartWithStopK8s 8.68
285 TestNoKubernetes/serial/Start 5.03
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
287 TestNoKubernetes/serial/ProfileList 13.52
288 TestNoKubernetes/serial/Stop 1.23
289 TestNoKubernetes/serial/StartNoArgs 6.6
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
291 TestPause/serial/SecondStartNoReconfiguration 27.06
292 TestPause/serial/Pause 0.89
293 TestPause/serial/VerifyStatus 0.33
294 TestPause/serial/Unpause 1.03
295 TestPause/serial/PauseAgain 1.05
296 TestPause/serial/DeletePaused 4.26
297 TestPause/serial/VerifyDeletedResources 0.62
298 TestNetworkPlugins/group/auto/Start 73.86
299 TestNetworkPlugins/group/kindnet/Start 47.8
300 TestNetworkPlugins/group/calico/Start 55.25
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.2
304 TestNetworkPlugins/group/auto/KubeletFlags 0.29
305 TestNetworkPlugins/group/auto/NetCatPod 10.19
306 TestNetworkPlugins/group/kindnet/DNS 0.13
307 TestNetworkPlugins/group/kindnet/Localhost 0.11
308 TestNetworkPlugins/group/kindnet/HairPin 0.11
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/auto/DNS 0.13
311 TestNetworkPlugins/group/auto/Localhost 0.11
312 TestNetworkPlugins/group/auto/HairPin 0.11
313 TestNetworkPlugins/group/calico/KubeletFlags 0.28
314 TestNetworkPlugins/group/calico/NetCatPod 11.19
315 TestNetworkPlugins/group/calico/DNS 0.15
316 TestNetworkPlugins/group/calico/Localhost 0.12
317 TestNetworkPlugins/group/calico/HairPin 0.13
318 TestNetworkPlugins/group/custom-flannel/Start 60.56
319 TestNetworkPlugins/group/enable-default-cni/Start 70.02
320 TestNetworkPlugins/group/flannel/Start 62.99
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
323 TestNetworkPlugins/group/custom-flannel/DNS 0.13
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
333 TestNetworkPlugins/group/flannel/NetCatPod 11.19
334 TestNetworkPlugins/group/bridge/Start 64.58
335 TestNetworkPlugins/group/flannel/DNS 0.19
336 TestNetworkPlugins/group/flannel/Localhost 0.13
337 TestNetworkPlugins/group/flannel/HairPin 0.18
339 TestStartStop/group/old-k8s-version/serial/FirstStart 57.73
341 TestStartStop/group/no-preload/serial/FirstStart 61.8
343 TestStartStop/group/embed-certs/serial/FirstStart 45.17
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
345 TestNetworkPlugins/group/bridge/NetCatPod 11.2
346 TestStartStop/group/old-k8s-version/serial/DeployApp 8.26
347 TestNetworkPlugins/group/bridge/DNS 0.13
348 TestNetworkPlugins/group/bridge/Localhost 0.11
349 TestNetworkPlugins/group/bridge/HairPin 0.11
350 TestStartStop/group/embed-certs/serial/DeployApp 8.24
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
352 TestStartStop/group/old-k8s-version/serial/Stop 12.09
353 TestStartStop/group/no-preload/serial/DeployApp 8.24
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
355 TestStartStop/group/embed-certs/serial/Stop 13.24
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
357 TestStartStop/group/no-preload/serial/Stop 11.97
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
359 TestStartStop/group/old-k8s-version/serial/SecondStart 51.09
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.86
362 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
363 TestStartStop/group/embed-certs/serial/SecondStart 49.35
364 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
365 TestStartStop/group/no-preload/serial/SecondStart 51.68
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.26
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/old-k8s-version/serial/Pause 2.82
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
377 TestStartStop/group/embed-certs/serial/Pause 3.2
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.57
381 TestStartStop/group/newest-cni/serial/FirstStart 29.03
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
384 TestStartStop/group/no-preload/serial/Pause 3.23
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
387 TestStartStop/group/newest-cni/serial/Stop 2.34
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
389 TestStartStop/group/newest-cni/serial/SecondStart 15.32
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
393 TestStartStop/group/newest-cni/serial/Pause 2.7
394 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.63
TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-726194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-726194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.261207778s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 13:46:43.860443  498696 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 13:46:43.860578  498696 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-726194
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-726194: exit status 85 (64.764682ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-726194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-726194 │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:46:38.642638  498708 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:46:38.642759  498708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:38.642772  498708 out.go:374] Setting ErrFile to fd 2...
	I0908 13:46:38.642778  498708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:38.642976  498708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	W0908 13:46:38.643142  498708 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21508-494960/.minikube/config/config.json: open /home/jenkins/minikube-integration/21508-494960/.minikube/config/config.json: no such file or directory
	I0908 13:46:38.643730  498708 out.go:368] Setting JSON to true
	I0908 13:46:38.644852  498708 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12545,"bootTime":1757326654,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:46:38.644925  498708 start.go:140] virtualization: kvm guest
	I0908 13:46:38.647195  498708 out.go:99] [download-only-726194] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 13:46:38.647345  498708 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 13:46:38.647410  498708 notify.go:220] Checking for updates...
	I0908 13:46:38.648893  498708 out.go:171] MINIKUBE_LOCATION=21508
	I0908 13:46:38.650385  498708 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:46:38.651709  498708 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:46:38.652867  498708 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 13:46:38.653851  498708 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 13:46:38.655628  498708 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:46:38.655877  498708 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:46:38.679106  498708 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:46:38.679210  498708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:38.727483  498708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:46:38.718110739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:38.727611  498708 docker.go:318] overlay module found
	I0908 13:46:38.729339  498708 out.go:99] Using the docker driver based on user configuration
	I0908 13:46:38.729413  498708 start.go:304] selected driver: docker
	I0908 13:46:38.729425  498708 start.go:918] validating driver "docker" against <nil>
	I0908 13:46:38.729543  498708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:38.776857  498708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:46:38.768156979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:38.777097  498708 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:46:38.777874  498708 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 13:46:38.778089  498708 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:46:38.779817  498708 out.go:171] Using Docker driver with root privileges
	I0908 13:46:38.781029  498708 cni.go:84] Creating CNI manager for ""
	I0908 13:46:38.781096  498708 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:46:38.781130  498708 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:46:38.781212  498708 start.go:348] cluster config:
	{Name:download-only-726194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-726194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:46:38.782375  498708 out.go:99] Starting "download-only-726194" primary control-plane node in "download-only-726194" cluster
	I0908 13:46:38.782391  498708 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:46:38.783609  498708 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:46:38.783634  498708 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:46:38.783832  498708 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:46:38.801256  498708 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:46:38.801468  498708 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:46:38.801588  498708 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:46:38.806077  498708 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:46:38.806104  498708 cache.go:58] Caching tarball of preloaded images
	I0908 13:46:38.806223  498708 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:46:38.807962  498708 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 13:46:38.807984  498708 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:46:38.860414  498708 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:46:42.182718  498708 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 13:46:42.288781  498708 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:46:42.288883  498708 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:46:43.200861  498708 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0908 13:46:43.201205  498708 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/download-only-726194/config.json ...
	I0908 13:46:43.201237  498708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/download-only-726194/config.json: {Name:mkd9a95204189dce65f6ad75b910c531ec9e1de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:46:43.201396  498708 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:46:43.201587  498708 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21508-494960/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-726194 host does not exist
	  To start a cluster, run: "minikube start -p download-only-726194"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
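
Aside: the preload URL above carries its expected digest in a ?checksum=md5:... query, and the log then shows separate "saving checksum" and "verifying checksum" steps. A hedged sketch of the verify step in Go (verifyMD5 is an illustrative helper; the path and digest are the ones from the log):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 re-hashes a downloaded file and compares it with the digest
	// that was carried in the download URL.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Path and md5 taken from the download line in the log above.
		err := verifyMD5(
			os.ExpandEnv("$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"),
			"72bc7f8573f574c02d8c9a9b3496176b",
		)
		fmt.Println("verify:", err)
	}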

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-726194
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (5.16s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-038960 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-038960 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.159116888s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.16s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 13:46:49.436707  498696 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 13:46:49.436783  498696 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-038960
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-038960: exit status 85 (63.66155ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-726194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-726194 │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │ 08 Sep 25 13:46 UTC │
	│ delete  │ -p download-only-726194                                                                                                                                                   │ download-only-726194 │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │ 08 Sep 25 13:46 UTC │
	│ start   │ -o=json --download-only -p download-only-038960 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-038960 │ jenkins │ v1.36.0 │ 08 Sep 25 13:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:46:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:46:44.321633  499051 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:46:44.321928  499051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:44.321941  499051 out.go:374] Setting ErrFile to fd 2...
	I0908 13:46:44.321947  499051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:46:44.322185  499051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 13:46:44.322804  499051 out.go:368] Setting JSON to true
	I0908 13:46:44.323827  499051 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12550,"bootTime":1757326654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:46:44.323948  499051 start.go:140] virtualization: kvm guest
	I0908 13:46:44.325928  499051 out.go:99] [download-only-038960] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:46:44.326113  499051 notify.go:220] Checking for updates...
	I0908 13:46:44.327511  499051 out.go:171] MINIKUBE_LOCATION=21508
	I0908 13:46:44.328805  499051 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:46:44.329916  499051 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:46:44.330981  499051 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 13:46:44.332035  499051 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 13:46:44.334063  499051 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:46:44.334347  499051 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:46:44.355910  499051 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:46:44.356047  499051 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:44.405919  499051 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-09-08 13:46:44.396550626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:44.406023  499051 docker.go:318] overlay module found
	I0908 13:46:44.407660  499051 out.go:99] Using the docker driver based on user configuration
	I0908 13:46:44.407690  499051 start.go:304] selected driver: docker
	I0908 13:46:44.407697  499051 start.go:918] validating driver "docker" against <nil>
	I0908 13:46:44.407800  499051 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:46:44.456986  499051 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-09-08 13:46:44.447908033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:46:44.457167  499051 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:46:44.457710  499051 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 13:46:44.457880  499051 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:46:44.459689  499051 out.go:171] Using Docker driver with root privileges
	I0908 13:46:44.460764  499051 cni.go:84] Creating CNI manager for ""
	I0908 13:46:44.460839  499051 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:46:44.460858  499051 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:46:44.460938  499051 start.go:348] cluster config:
	{Name:download-only-038960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-038960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:46:44.462108  499051 out.go:99] Starting "download-only-038960" primary control-plane node in "download-only-038960" cluster
	I0908 13:46:44.462129  499051 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:46:44.463102  499051 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:46:44.463128  499051 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:46:44.463157  499051 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:46:44.479630  499051 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:46:44.479812  499051 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:46:44.479831  499051 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 13:46:44.479838  499051 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 13:46:44.479845  499051 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 13:46:44.490945  499051 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:46:44.490998  499051 cache.go:58] Caching tarball of preloaded images
	I0908 13:46:44.491173  499051 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:46:44.492782  499051 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 13:46:44.492804  499051 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:46:44.525223  499051 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 13:46:47.907622  499051 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:46:47.907728  499051 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-494960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 13:46:48.726827  499051 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:46:48.727203  499051 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/download-only-038960/config.json ...
	I0908 13:46:48.727239  499051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/download-only-038960/config.json: {Name:mk151da06708afcfaedf1a83eb7b922ad5ca0ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:46:48.727429  499051 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:46:48.727569  499051 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21508-494960/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-038960 host does not exist
	  To start a cluster, run: "minikube start -p download-only-038960"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-038960
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.13s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-474033 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-474033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-474033
--- PASS: TestDownloadOnlyKic (1.13s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I0908 13:46:51.264389  498696 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-761448 --alsologtostderr --binary-mirror http://127.0.0.1:38697 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-761448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-761448
--- PASS: TestBinaryMirror (0.80s)
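
Aside: the kubectl downloads in this report use checksum=file:.../kubectl.sha256, i.e. the expected digest is fetched from a sidecar file published next to the binary. A sketch of that comparison in Go, assuming the sidecar body is just the hex digest (sha256Matches is an illustrative name, not minikube's API):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// sha256Matches downloads the published digest and compares it with the
	// hash of the locally cached binary.
	func sha256Matches(binPath, sumURL string) (bool, error) {
		resp, err := http.Get(sumURL)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		sum, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		f, err := os.Open(binPath)
		if err != nil {
			return false, err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return false, err
		}
		return strings.TrimSpace(string(sum)) == hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		ok, err := sha256Matches(
			os.ExpandEnv("$MINIKUBE_HOME/cache/linux/amd64/v1.34.0/kubectl"),
			"https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256",
		)
		fmt.Println(ok, err)
	}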

TestOffline (95.45s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-385479 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-385479 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m33.101360921s)
helpers_test.go:175: Cleaning up "offline-crio-385479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-385479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-385479: (2.351171442s)
--- PASS: TestOffline (95.45s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-329194
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-329194: exit status 85 (57.810738ms)

-- stdout --
	* Profile "addons-329194" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-329194"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
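
Aside: both PreSetup tests assert on minikube's exit status (85, the "profile not found" code seen above) rather than on stdout. In Go, that code is read from exec.ExitError; a small sketch (exitCode is an illustrative helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode runs a command and returns its numeric exit status; a nil
	// error from Run() maps to 0.
	func exitCode(name string, args ...string) (int, error) {
		err := exec.Command(name, args...).Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), nil
		}
		return 0, err
	}

	func main() {
		code, err := exitCode("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-329194")
		fmt.Println(code, err) // the test above expects 85 when the profile does not exist
	}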

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-329194
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-329194: exit status 85 (59.042541ms)

-- stdout --
	* Profile "addons-329194" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-329194"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (164.48s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-329194 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-329194 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m44.475878062s)
--- PASS: TestAddons/Setup (164.48s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-329194 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-329194 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-329194 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-329194 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dd72b31b-c3b8-48bd-8e7b-0d0071946dbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dd72b31b-c3b8-48bd-8e7b-0d0071946dbd] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003946593s
addons_test.go:694: (dbg) Run:  kubectl --context addons-329194 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-329194 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-329194 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)
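
Aside: the credential checks above reduce to reading environment variables inside the running pod with kubectl exec. A minimal sketch (podEnv is an illustrative wrapper around the same command line the test runs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// podEnv reads one environment variable from a running pod via kubectl exec.
	func podEnv(kubeContext, pod, name string) (string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod,
			"--", "/bin/sh", "-c", "printenv "+name).Output()
		return string(out), err
	}

	func main() {
		v, err := podEnv("addons-329194", "busybox", "GOOGLE_APPLICATION_CREDENTIALS")
		fmt.Println(v, err)
	}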

TestAddons/parallel/Registry (17.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.286387ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-p964w" [6d73c2d9-766f-4810-8470-49c8eb663237] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002654134s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-49b5f" [062d62ef-0513-48e8-b537-0bea42dfb5c1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003535738s
addons_test.go:392: (dbg) Run:  kubectl --context addons-329194 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-329194 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-329194 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.845306389s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 ip
2025/09/08 13:50:11 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.61s)
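
Aside: the registry test pairs an in-cluster wget --spider against the service DNS name with a host-side GET against the node IP on port 5000; both are reachability probes that ignore the response body. A sketch of the host-side probe (probe is an illustrative name; the address is taken from the log):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probe issues a GET and reports only whether the endpoint answered
	// with a non-error status.
	func probe(url string) error {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode >= 400 {
			return fmt.Errorf("%s: HTTP %d", url, resp.StatusCode)
		}
		return nil
	}

	func main() {
		fmt.Println(probe("http://192.168.49.2:5000"))
	}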

TestAddons/parallel/RegistryCreds (0.83s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.646528ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-329194
addons_test.go:332: (dbg) Run:  kubectl --context addons-329194 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.83s)

TestAddons/parallel/InspektorGadget (5.37s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jjphw" [3738a2a2-83f7-43e8-854d-2cbaec1ee361] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004386741s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.37s)

TestAddons/parallel/MetricsServer (5.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.587694ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-l4dd8" [3f7d5632-6df9-4dd4-98cf-8cb0c9a7d9de] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00256332s
addons_test.go:463: (dbg) Run:  kubectl --context addons-329194 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

TestAddons/parallel/CSI (66.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 13:50:00.315398  498696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 13:50:00.318721  498696 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 13:50:00.318749  498696 kapi.go:107] duration metric: took 3.376655ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.388953ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-329194 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-329194 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [74a482d4-bd19-441b-87c6-af457b52700a] Pending
helpers_test.go:352: "task-pv-pod" [74a482d4-bd19-441b-87c6-af457b52700a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [74a482d4-bd19-441b-87c6-af457b52700a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00415476s
addons_test.go:572: (dbg) Run:  kubectl --context addons-329194 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-329194 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-329194 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-329194 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-329194 delete pod task-pv-pod: (1.069681034s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-329194 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-329194 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-329194 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e93030b8-d81d-4e78-8492-b6d771fde2d3] Pending
helpers_test.go:352: "task-pv-pod-restore" [e93030b8-d81d-4e78-8492-b6d771fde2d3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e93030b8-d81d-4e78-8492-b6d771fde2d3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003783502s
addons_test.go:614: (dbg) Run:  kubectl --context addons-329194 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-329194 delete pod task-pv-pod-restore: (1.620248705s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-329194 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-329194 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.602995792s)
--- PASS: TestAddons/parallel/CSI (66.29s)
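
The repeated helpers_test.go:402 lines above are a poll loop: the helper re-runs kubectl with a jsonpath query until the PVC reports phase "Bound" or the wait times out. A minimal Go sketch of that loop follows (a hypothetical standalone helper, not minikube's actual code; the context name, PVC name, and 2s poll interval are assumptions taken from the log):

// pvcwait.go - hypothetical sketch of the PVC phase polling seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound re-runs kubectl until the PVC reports phase "Bound" or the timeout expires.
func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll, as the repeated helper lines above show
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-329194", "hpvc-restore", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}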

TestAddons/parallel/Headlamp (28.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-329194 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-lp4zc" [952a9e86-ed8a-4bb6-a8d6-2cab763f0583] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lp4zc" [952a9e86-ed8a-4bb6-a8d6-2cab763f0583] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.003447379s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 addons disable headlamp --alsologtostderr -v=1: (6.392795262s)
--- PASS: TestAddons/parallel/Headlamp (28.17s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-szl72" [e9626a08-2d03-44e6-bda8-5e469fe5a346] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002386816s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (57.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-329194 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-329194 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2f15f8cb-5d50-4a47-93b3-e68cfcf03063] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2f15f8cb-5d50-4a47-93b3-e68cfcf03063] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2f15f8cb-5d50-4a47-93b3-e68cfcf03063] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003014081s
addons_test.go:967: (dbg) Run:  kubectl --context addons-329194 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 ssh "cat /opt/local-path-provisioner/pvc-c22325b4-df5e-4394-9d6a-1a970c9e4697_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-329194 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-329194 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.554608562s)
--- PASS: TestAddons/parallel/LocalPath (57.46s)

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-85mff" [2d417a4a-cecf-44d0-acc0-41d50573d7b3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004282643s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/parallel/Yakd (10.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-l59th" [46960216-a76d-40b8-ac94-3d93b7eb9a0f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004144218s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-329194 addons disable yakd --alsologtostderr -v=1: (5.646988007s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

TestAddons/parallel/AmdGpuDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-j5795" [c99125c8-db90-424e-9eb6-12be65680109] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003426619s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.46s)

TestAddons/StoppedEnableDisable (12.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-329194
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-329194: (11.83005879s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-329194
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-329194
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-329194
--- PASS: TestAddons/StoppedEnableDisable (12.08s)

TestCertOptions (31.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-155670 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-155670 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.231609383s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-155670 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-155670 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-155670 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-155670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-155670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-155670: (1.963345591s)
--- PASS: TestCertOptions (31.81s)
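
The SAN check above shells out to openssl; the same assertion can be made directly with Go's crypto/x509. A minimal sketch, assuming the apiserver certificate has been copied out of /var/lib/minikube/certs/ to a local file (the expected SAN values mirror the --apiserver-ips/--apiserver-names flags in the log; this is illustrative, not the test's actual assertion code):

// checksan.go - hypothetical sketch: verify the extra SANs requested at start time.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
	"slices"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed copy of the node's apiserver cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs contain www.google.com:", slices.Contains(cert.DNSNames, "www.google.com"))
	wantIP := net.ParseIP("192.168.15.15")
	found := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			found = true
		}
	}
	fmt.Println("IP SANs contain 192.168.15.15:", found)
}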

TestCertExpiration (230.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-516045 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-516045 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.135171951s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-516045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-516045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.296734267s)
helpers_test.go:175: Cleaning up "cert-expiration-516045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-516045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-516045: (4.219807717s)
--- PASS: TestCertExpiration (230.65s)

TestForceSystemdFlag (26.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-193202 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-193202 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.104281482s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-193202 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-193202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-193202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-193202: (2.406121861s)
--- PASS: TestForceSystemdFlag (26.77s)
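
docker_test.go:132 cats the CRI-O drop-in config to verify the --force-systemd flag took effect. A minimal sketch of that check (the profile name comes from the log; the exact cgroup_manager = "systemd" key is an assumption about CRI-O's config format, not quoted from the test):

// cgroupcheck.go - hypothetical sketch of the systemd cgroup-manager assertion.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-193202",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in CRI-O config")
	}
}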

TestForceSystemdEnv (25.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-084556 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-084556 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.136080592s)
helpers_test.go:175: Cleaning up "force-systemd-env-084556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-084556
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-084556: (2.383015996s)
--- PASS: TestForceSystemdEnv (25.52s)

TestKVMDriverInstallOrUpdate (2.67s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0908 14:33:14.034072  498696 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 14:33:14.034253  498696 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 14:33:14.069325  498696 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 14:33:14.069468  498696 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 14:33:14.069520  498696 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate102460888/001/docker-machine-driver-kvm2
I0908 14:33:14.339371  498696 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate102460888/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0005942a0 gz:0xc0005942a8 tar:0xc000594230 tar.bz2:0xc000594240 tar.gz:0xc000594260 tar.xz:0xc000594270 tar.zst:0xc000594290 tbz2:0xc000594240 tgz:0xc000594260 txz:0xc000594270 tzst:0xc000594290 xz:0xc0005942b0 zip:0xc000594310 zst:0xc0005942b8] Getters:map[file:0xc0008fcdf0 http:0xc0008a65f0 https:0xc0008a6640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 14:33:14.339432  498696 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate102460888/001/docker-machine-driver-kvm2
I0908 14:33:15.596817  498696 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 14:33:15.596909  498696 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 14:33:15.630406  498696 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 14:33:15.630446  498696 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 14:33:15.630533  498696 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 14:33:15.630570  498696 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate102460888/002/docker-machine-driver-kvm2
I0908 14:33:15.793749  498696 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate102460888/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0005942a0 gz:0xc0005942a8 tar:0xc000594230 tar.bz2:0xc000594240 tar.gz:0xc000594260 tar.xz:0xc000594270 tar.zst:0xc000594290 tbz2:0xc000594240 tgz:0xc000594260 txz:0xc000594270 tzst:0xc000594290 xz:0xc0005942b0 zip:0xc000594310 zst:0xc0005942b8] Getters:map[file:0xc000d1ca00 http:0xc00098bc20 https:0xc00098bc70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 14:33:15.793801  498696 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate102460888/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (2.67s)
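
The log above documents a deliberate fallback: the arch-suffixed release asset's checksum file returns 404, so the installer retries the unsuffixed "common" name. A minimal Go sketch of that candidate-list pattern (illustrative only; the URLs mirror the log, but this helper is not minikube's actual downloader):

// driverfallback.go - hypothetical sketch of the arch-specific-then-common fallback.
package main

import (
	"fmt"
	"net/http"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"

// available reports whether the release asset's checksum file can be fetched.
func available(name string) bool {
	resp, err := http.Head(base + name + ".sha256")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for _, name := range []string{
		"docker-machine-driver-kvm2-amd64", // arch-specific asset; 404s in the log above
		"docker-machine-driver-kvm2",       // common fallback that the log then tries
	} {
		if available(name) {
			fmt.Println("would download:", base+name)
			return
		}
		fmt.Println("checksum missing for", name, "- trying next candidate")
	}
}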

TestErrorSpam/setup (22.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-748551 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-748551 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-748551 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-748551 --driver=docker  --container-runtime=crio: (22.512741381s)
--- PASS: TestErrorSpam/setup (22.51s)

TestErrorSpam/start (0.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 stop: (1.17700582s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748551 --log_dir /tmp/nospam-748551 stop
--- PASS: TestErrorSpam/stop (1.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21508-494960/.minikube/files/etc/test/nested/copy/498696/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-746536 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-746536 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.9811495s)
--- PASS: TestFunctional/serial/StartWithProxy (41.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.26s)

=== RUN   TestFunctional/serial/SoftStart
I0908 13:54:28.155834  498696 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-746536 --alsologtostderr -v=8
E0908 13:54:37.167542  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.173974  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.185380  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.206847  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.248279  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.329742  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.491310  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:37.812772  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:38.454662  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:39.736376  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:42.299336  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:47.421153  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:54:57.663530  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-746536 --alsologtostderr -v=8: (40.255857003s)
functional_test.go:678: soft start took 40.256671882s for "functional-746536" cluster.
I0908 13:55:08.412092  498696 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (40.26s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-746536 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 cache add registry.k8s.io/pause:3.1: (1.038169151s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 cache add registry.k8s.io/pause:3.3: (1.106038347s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 cache add registry.k8s.io/pause:latest: (1.139790681s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-746536 /tmp/TestFunctionalserialCacheCmdcacheadd_local950745225/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cache add minikube-local-cache-test:functional-746536
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 cache add minikube-local-cache-test:functional-746536: (1.041999358s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cache delete minikube-local-cache-test:functional-746536
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-746536
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.358901ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
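
The cache_reload sequence above is: remove the cached image inside the node, confirm crictl inspecti now fails, run minikube cache reload, then confirm the image is back. A standalone sketch of the same sequence (profile name and image taken from the log; error handling deliberately simplified, and the ssh command is passed as a single string rather than split as the test does):

// cachereload.go - hypothetical sketch of the reload verification above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	p := "functional-746536"
	_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	_ = run("-p", p, "cache", "reload") // re-loads cached images into the running node
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored the image")
}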

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 kubectl -- --context functional-746536 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-746536 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-746536 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 13:55:18.145644  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-746536 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.97095851s)
functional_test.go:776: restart took 33.971141008s for "functional-746536" cluster.
I0908 13:55:49.560041  498696 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (33.97s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-746536 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
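
ComponentHealth lists the control-plane pods as JSON and checks each pod's phase and Ready condition, which is what the alternating "phase: Running" / "status: Ready" lines above reflect. A minimal Go sketch of that check (a hypothetical standalone program, not the test itself; the struct covers only the fields used):

// controlplane.go - hypothetical sketch of the control-plane health check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-746536",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}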

TestFunctional/serial/LogsCmd (1.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 logs: (1.38966919s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 logs --file /tmp/TestFunctionalserialLogsFileCmd3263714094/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 logs --file /tmp/TestFunctionalserialLogsFileCmd3263714094/001/logs.txt: (1.403710443s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (3.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-746536 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-746536
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-746536: exit status 115 (331.015986ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30631 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-746536 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 config get cpus: exit status 14 (58.865126ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 config get cpus: exit status 14 (53.752233ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (10.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-746536 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-746536 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 542041: os: process already finished
E0908 13:57:21.034565  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:59:37.167122  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:00:04.876928  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:04:37.166760  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (10.58s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (142.631041ms)
-- stdout --
	* [functional-746536] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0908 13:56:28.726070  541657 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:28.726173  541657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:28.726177  541657 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:28.726182  541657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:28.726392  541657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 13:56:28.726945  541657 out.go:368] Setting JSON to false
	I0908 13:56:28.727965  541657 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13135,"bootTime":1757326654,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:56:28.728068  541657 start.go:140] virtualization: kvm guest
	I0908 13:56:28.729970  541657 out.go:179] * [functional-746536] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 13:56:28.731189  541657 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:56:28.731201  541657 notify.go:220] Checking for updates...
	I0908 13:56:28.733497  541657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:56:28.734640  541657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:56:28.735824  541657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 13:56:28.736903  541657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:56:28.737936  541657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:56:28.739367  541657 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:28.739864  541657 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:56:28.762283  541657 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:56:28.762372  541657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:28.810836  541657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 13:56:28.801501508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:56:28.810987  541657 docker.go:318] overlay module found
	I0908 13:56:28.813522  541657 out.go:179] * Using the docker driver based on existing profile
	I0908 13:56:28.814653  541657 start.go:304] selected driver: docker
	I0908 13:56:28.814665  541657 start.go:918] validating driver "docker" against &{Name:functional-746536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-746536 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:28.814747  541657 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:56:28.816601  541657 out.go:203] 
	W0908 13:56:28.817672  541657 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 13:56:28.818755  541657 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-746536 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
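
The RSRC_INSUFFICIENT_REQ_MEMORY exit in the stderr above is the intended outcome: the dry-run deliberately requests 250MB to exercise minikube's memory validation, which enforces a usable floor of 1800MB. As a rough sketch (the second flag value is assumed, not taken from the test), the same dry-run validation passes once the request meets the floor:

    # Fails validation: 250MB is below the 1800MB usable minimum.
    out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 250MB --driver=docker --container-runtime=crio

    # Passes validation: request at or above the usable minimum.
    out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 2048MB --driver=docker --container-runtime=crio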

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (150.698103ms)

-- stdout --
	* [functional-746536] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0908 13:56:28.579622  541581 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:28.579792  541581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:28.579815  541581 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:28.579823  541581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:28.580165  541581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 13:56:28.580851  541581 out.go:368] Setting JSON to false
	I0908 13:56:28.581995  541581 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13135,"bootTime":1757326654,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 13:56:28.582093  541581 start.go:140] virtualization: kvm guest
	I0908 13:56:28.584250  541581 out.go:179] * [functional-746536] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 13:56:28.585960  541581 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:56:28.585991  541581 notify.go:220] Checking for updates...
	I0908 13:56:28.588323  541581 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:56:28.589537  541581 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 13:56:28.590698  541581 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 13:56:28.591713  541581 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 13:56:28.592740  541581 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:56:28.594118  541581 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:28.594614  541581 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:56:28.617289  541581 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:56:28.617372  541581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:28.668213  541581 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-08 13:56:28.658605368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 13:56:28.668315  541581 docker.go:318] overlay module found
	I0908 13:56:28.670063  541581 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 13:56:28.671228  541581 start.go:304] selected driver: docker
	I0908 13:56:28.671254  541581 start.go:918] validating driver "docker" against &{Name:functional-746536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-746536 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:28.671344  541581 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:56:28.673582  541581 out.go:203] 
	W0908 13:56:28.674641  541581 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 13:56:28.675670  541581 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
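
InternationalLanguage passes because the same RSRC_INSUFFICIENT_REQ_MEMORY failure is rendered in French: "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the localized form of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A minimal way to reproduce the localized output by hand, assuming minikube selects its translation catalog from the standard locale environment variables:

    # Assumption: the French catalog is picked up via LC_ALL/LANG.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-746536 --dry-run --memory 250MB --driver=docker --container-runtime=crio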

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)
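
The second status invocation shows the Go-template form of the format flag (the "kublet:" text is just a label string the test prints; the underlying field is .Kubelet). Individual fields can be pulled the same way, e.g. as a sketch:

    # Print only the host state; any of .Host/.Kubelet/.APIServer/.Kubeconfig works.
    out/minikube-linux-amd64 -p functional-746536 status -f {{.Host}}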

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (31.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [eac5af3d-0c4c-492f-9878-8b12d5ad97d6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003627723s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-746536 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-746536 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-746536 get pvc myclaim -o=json
I0908 13:56:05.599788  498696 retry.go:31] will retry after 1.442661982s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:44a5788c-dd8a-409b-82db-b8d7031663e4 ResourceVersion:738 Generation:0 CreationTimestamp:2025-09-08 13:56:05 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-44a5788c-dd8a-409b-82db-b8d7031663e4 StorageClassName:0xc001a7ed50 VolumeMode:0xc001a7ed60 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-746536 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-746536 apply -f testdata/storage-provisioner/pod.yaml
I0908 13:56:07.243092  498696 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7803951e-a21a-4fed-b4fd-7ce3f4eabd7f] Pending
helpers_test.go:352: "sp-pod" [7803951e-a21a-4fed-b4fd-7ce3f4eabd7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7803951e-a21a-4fed-b4fd-7ce3f4eabd7f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003608327s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-746536 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-746536 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-746536 delete -f testdata/storage-provisioner/pod.yaml: (1.185176625s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-746536 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2f1890e7-61a9-433c-bb9f-d179abdcbbb5] Pending
helpers_test.go:352: "sp-pod" [2f1890e7-61a9-433c-bb9f-d179abdcbbb5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2f1890e7-61a9-433c-bb9f-d179abdcbbb5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004264341s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-746536 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.47s)
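
The sequence above is a persistence check: touch /tmp/mount/foo in the first sp-pod, delete the pod, schedule a fresh one against the same claim, and confirm the file is still there. Condensed from the log (the waits for Bound and Running are elided):

    kubectl --context functional-746536 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-746536 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-746536 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-746536 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-746536 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-746536 exec sp-pod -- ls /tmp/mount    # foo must survive the pod restart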

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (1.72s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh -n functional-746536 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cp functional-746536:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2788125607/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh -n functional-746536 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh -n functional-746536 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.72s)

TestFunctional/parallel/MySQL (20.14s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-746536 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-85tcz" [cfaec473-3bb8-45fe-989a-5998c50c202c] Pending
helpers_test.go:352: "mysql-5bb876957f-85tcz" [cfaec473-3bb8-45fe-989a-5998c50c202c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-85tcz" [cfaec473-3bb8-45fe-989a-5998c50c202c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003637622s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-746536 exec mysql-5bb876957f-85tcz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-746536 exec mysql-5bb876957f-85tcz -- mysql -ppassword -e "show databases;": exit status 1 (271.867214ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0908 13:56:14.398302  498696 retry.go:31] will retry after 667.925045ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-746536 exec mysql-5bb876957f-85tcz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-746536 exec mysql-5bb876957f-85tcz -- mysql -ppassword -e "show databases;": exit status 1 (132.687843ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0908 13:56:15.199304  498696 retry.go:31] will retry after 1.777339022s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-746536 exec mysql-5bb876957f-85tcz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.14s)
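
The two ERROR 2002 exits are the usual race between the pod reporting Running and mysqld actually listening on its socket; the harness retries with backoff until the query succeeds. The same pattern as a shell sketch:

    # Retry until mysqld accepts connections (pod Running != server ready).
    until kubectl --context functional-746536 exec mysql-5bb876957f-85tcz -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done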

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/498696/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /etc/test/nested/copy/498696/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
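
FileSync verifies that a host-side file is copied into the node. Assuming the documented sync directory (files placed under $MINIKUBE_HOME/files/ are mirrored into the VM at the same path on start), the check reduces to:

    # Host side: $MINIKUBE_HOME/files/etc/test/nested/copy/498696/hosts
    out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /etc/test/nested/copy/498696/hosts"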

TestFunctional/parallel/CertSync (1.76s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/498696.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /etc/ssl/certs/498696.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/498696.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /usr/share/ca-certificates/498696.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4986962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /etc/ssl/certs/4986962.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4986962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /usr/share/ca-certificates/4986962.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)
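
The hash-named paths (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) follow the OpenSSL subject-hash convention for certificate stores, which is presumably how the test derives the expected filenames from the synced PEMs:

    # <hash>.0 is the conventional store name for a cert with this subject hash.
    openssl x509 -noout -hash -in 498696.pem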

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-746536 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
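
The go-template flattens every label key on the first node into one line; a less template-heavy spot check of the same data:

    kubectl --context functional-746536 get nodes --show-labels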

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh "sudo systemctl is-active docker": exit status 1 (285.314236ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh "sudo systemctl is-active containerd": exit status 1 (285.269634ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
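
Exit status 3 from systemctl is-active is the standard "unit not active" code, so the non-zero exits are the expected result on a crio cluster: docker and containerd must both be inactive. A direct sketch of the same check:

    # Prints one state per unit; exits 0 only if all units are active.
    out/minikube-linux-amd64 -p functional-746536 ssh "sudo systemctl is-active docker containerd crio"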

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-746536 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-746536
localhost/kicbase/echo-server:functional-746536
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-746536 image ls --format short --alsologtostderr:
I0908 13:56:31.721560  542527 out.go:360] Setting OutFile to fd 1 ...
I0908 13:56:31.721682  542527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:31.721692  542527 out.go:374] Setting ErrFile to fd 2...
I0908 13:56:31.721700  542527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:31.722026  542527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
I0908 13:56:31.722594  542527 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:31.722685  542527 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:31.723164  542527 cli_runner.go:164] Run: docker container inspect functional-746536 --format={{.State.Status}}
I0908 13:56:31.742081  542527 ssh_runner.go:195] Run: systemctl --version
I0908 13:56:31.742228  542527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-746536
I0908 13:56:31.764420  542527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/functional-746536/id_rsa Username:docker}
I0908 13:56:31.849043  542527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
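
Under crio, image ls is backed by crictl inside the node (the "sudo crictl images --output json" run in the stderr above); the same inventory can be inspected directly:

    out/minikube-linux-amd64 -p functional-746536 ssh "sudo crictl images"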

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-746536 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/kicbase/echo-server           │ functional-746536  │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-746536  │ aca8aeba3635d │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ localhost/minikube-local-cache-test     │ functional-746536  │ 609f176ba77c6 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-746536 image ls --format table --alsologtostderr:
I0908 13:56:34.476804  543148 out.go:360] Setting OutFile to fd 1 ...
I0908 13:56:34.477044  543148 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:34.477052  543148 out.go:374] Setting ErrFile to fd 2...
I0908 13:56:34.477056  543148 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:34.477252  543148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
I0908 13:56:34.477818  543148 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:34.477907  543148 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:34.478317  543148 cli_runner.go:164] Run: docker container inspect functional-746536 --format={{.State.Status}}
I0908 13:56:34.496707  543148 ssh_runner.go:195] Run: systemctl --version
I0908 13:56:34.496760  543148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-746536
I0908 13:56:34.516288  543148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/functional-746536/id_rsa Username:docker}
I0908 13:56:34.601214  543148 ssh_runner.go:195] Run: sudo crictl images --output json
2025/09/08 13:56:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-746536 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-746536"],"size":"4943877"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"90550c43ad2bcfd1
1fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21
871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a
9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a93498fc53be22cb9133ce068d12e64a6d3a6ff42b7895880f2e46fcca081286","repoDigests":["docker.io/library/a5c27e7cb0b8729a4c97e2a74172b2702a97c45916cd416ca21544e3278a51ff-tmp@sha256:6d22a1581842f1dcf9b80b736b188e29517067d5279220c4cfe6b6b5cc4ce464"],"repoTags":[],"size":"1465610"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d16650
1de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.
io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772d
a31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"609f176ba77c61f97ccdb6dbc64542efec3da3c46a13861b528bad75960b50e4","repoDigests":["localhost/minikube-local-cache-test@sha256:169dfc1542412cec55dfa14a5c0eb55fc047e02f1812622cb6266955399729fc"],"repoTags":["localhost/minikube-local-cache-test:functional-746536"],"size":"3330"},{"id":"aca8aeba3635dc69e07231364aee316bba29fa13248288ca83e04477b7f14efb","repoDigests":["localhost/my-image@sha256:66053021771fa1e1342f141e2f99ed9e6701e2d1481fe27eff17c822cf428691"],"repoTags":["localhost/my-image:functional-746536"],"size":"1468193"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43
da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-746536 image ls --format json --alsologtostderr:
I0908 13:56:34.261016  543099 out.go:360] Setting OutFile to fd 1 ...
I0908 13:56:34.261304  543099 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:34.261314  543099 out.go:374] Setting ErrFile to fd 2...
I0908 13:56:34.261318  543099 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:34.261521  543099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
I0908 13:56:34.262069  543099 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:34.262156  543099 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:34.262539  543099 cli_runner.go:164] Run: docker container inspect functional-746536 --format={{.State.Status}}
I0908 13:56:34.282945  543099 ssh_runner.go:195] Run: systemctl --version
I0908 13:56:34.283004  543099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-746536
I0908 13:56:34.302142  543099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/functional-746536/id_rsa Username:docker}
I0908 13:56:34.384923  543099 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
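
The JSON form is the easiest to post-process; for example, a tag/size listing roughly equivalent to the table output (assumes jq on the host):

    out/minikube-linux-amd64 -p functional-746536 image ls --format json | \
      jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'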

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-746536 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-746536
size: "4943877"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 609f176ba77c61f97ccdb6dbc64542efec3da3c46a13861b528bad75960b50e4
repoDigests:
- localhost/minikube-local-cache-test@sha256:169dfc1542412cec55dfa14a5c0eb55fc047e02f1812622cb6266955399729fc
repoTags:
- localhost/minikube-local-cache-test:functional-746536
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-746536 image ls --format yaml --alsologtostderr:
I0908 13:56:31.951899  542578 out.go:360] Setting OutFile to fd 1 ...
I0908 13:56:31.952035  542578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:31.952044  542578 out.go:374] Setting ErrFile to fd 2...
I0908 13:56:31.952049  542578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:31.952273  542578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
I0908 13:56:31.952947  542578 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:31.953047  542578 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:31.953468  542578 cli_runner.go:164] Run: docker container inspect functional-746536 --format={{.State.Status}}
I0908 13:56:31.971493  542578 ssh_runner.go:195] Run: systemctl --version
I0908 13:56:31.971556  542578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-746536
I0908 13:56:31.990011  542578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/functional-746536/id_rsa Username:docker}
I0908 13:56:32.073201  542578 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh pgrep buildkitd: exit status 1 (256.131195ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image build -t localhost/my-image:functional-746536 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 image build -t localhost/my-image:functional-746536 testdata/build --alsologtostderr: (1.602706835s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-746536 image build -t localhost/my-image:functional-746536 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a93498fc53b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-746536
--> aca8aeba363
Successfully tagged localhost/my-image:functional-746536
aca8aeba3635dc69e07231364aee316bba29fa13248288ca83e04477b7f14efb
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-746536 image build -t localhost/my-image:functional-746536 testdata/build --alsologtostderr:
I0908 13:56:32.422654  542728 out.go:360] Setting OutFile to fd 1 ...
I0908 13:56:32.422956  542728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:32.422968  542728 out.go:374] Setting ErrFile to fd 2...
I0908 13:56:32.422975  542728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 13:56:32.423169  542728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
I0908 13:56:32.423799  542728 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:32.424592  542728 config.go:182] Loaded profile config "functional-746536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 13:56:32.425002  542728 cli_runner.go:164] Run: docker container inspect functional-746536 --format={{.State.Status}}
I0908 13:56:32.443641  542728 ssh_runner.go:195] Run: systemctl --version
I0908 13:56:32.443701  542728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-746536
I0908 13:56:32.461648  542728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/functional-746536/id_rsa Username:docker}
I0908 13:56:32.553953  542728 build_images.go:161] Building image from path: /tmp/build.3646373977.tar
I0908 13:56:32.554026  542728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 13:56:32.562880  542728 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3646373977.tar
I0908 13:56:32.566564  542728 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3646373977.tar: stat -c "%s %y" /var/lib/minikube/build/build.3646373977.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3646373977.tar': No such file or directory
I0908 13:56:32.566600  542728 ssh_runner.go:362] scp /tmp/build.3646373977.tar --> /var/lib/minikube/build/build.3646373977.tar (3072 bytes)
I0908 13:56:32.589863  542728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3646373977
I0908 13:56:32.598424  542728 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3646373977 -xf /var/lib/minikube/build/build.3646373977.tar
I0908 13:56:32.607194  542728 crio.go:315] Building image: /var/lib/minikube/build/build.3646373977
I0908 13:56:32.607272  542728 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-746536 /var/lib/minikube/build/build.3646373977 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 13:56:33.953171  542728 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-746536 /var/lib/minikube/build/build.3646373977 --cgroup-manager=cgroupfs: (1.345866418s)
I0908 13:56:33.953251  542728 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3646373977
I0908 13:56:33.962087  542728 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3646373977.tar
I0908 13:56:33.970365  542728 build_images.go:217] Built localhost/my-image:functional-746536 from /tmp/build.3646373977.tar
I0908 13:56:33.970403  542728 build_images.go:133] succeeded building to: functional-746536
I0908 13:56:33.970409  542728 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.09s)
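Note: the STEP 1/3..3/3 lines in the stdout above fully determine the three build instructions under testdata/build. A minimal sketch of reproducing the build by hand; only the instructions and the image-build command come from the log, the contents of content.txt are an assumption:

    # hypothetical reconstruction of testdata/build, inferred from the STEP lines above
    mkdir -p build && cd build
    printf 'hello' > content.txt              # file contents are an assumption
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # same entry point the test uses (functional_test.go:330 above)
    out/minikube-linux-amd64 -p functional-746536 image build -t localhost/my-image:functional-746536 .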

TestFunctional/parallel/ImageCommands/Setup (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.017741636s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-746536
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image load --daemon kicbase/echo-server:functional-746536 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 image load --daemon kicbase/echo-server:functional-746536 --alsologtostderr: (1.217396825s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)
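Note: the daemon-load path above can be reproduced outside the test harness with the same commands the log records (see the Setup test earlier for the pull/tag step); only the profile name and tag are specific to this run:

    # sketch: stage an image in the host Docker daemon, then load it into the cluster runtime
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-746536
    out/minikube-linux-amd64 -p functional-746536 image load --daemon kicbase/echo-server:functional-746536
    out/minikube-linux-amd64 -p functional-746536 image ls   # the tag should now appear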

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 536419: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image load --daemon kicbase/echo-server:functional-746536 --alsologtostderr
E0908 13:55:59.112628  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 image load --daemon kicbase/echo-server:functional-746536 --alsologtostderr: (1.139878498s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-746536 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [56d20554-dbdc-4b29-b22b-2e03a9583784] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [56d20554-dbdc-4b29-b22b-2e03a9583784] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.00372927s
I0908 13:56:17.913205  498696 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-746536
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image load --daemon kicbase/echo-server:functional-746536 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image save kicbase/echo-server:functional-746536 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 image save kicbase/echo-server:functional-746536 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.398742625s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image rm kicbase/echo-server:functional-746536 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)
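Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a single round trip. A condensed sketch; the tarball path is shortened from the workspace path recorded in the log:

    # sketch of the save / remove / restore cycle exercised by the three tests above
    out/minikube-linux-amd64 -p functional-746536 image save kicbase/echo-server:functional-746536 ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-746536 image rm kicbase/echo-server:functional-746536
    out/minikube-linux-amd64 -p functional-746536 image load ./echo-server-save.tar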

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-746536
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 image save --daemon kicbase/echo-server:functional-746536 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 image save --daemon kicbase/echo-server:functional-746536 --alsologtostderr: (1.038104943s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-746536
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-746536 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.244.152 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
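Note: the tunnel sequence above (StartTunnel → WaitService → IngressIP → AccessDirect) can be driven by hand. The final curl is an assumed equivalent of the test's HTTP check, not a command taken from the log:

    # sketch: expose a LoadBalancer service via minikube tunnel and reach it from the host
    out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr &   # keep running
    kubectl --context functional-746536 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.102.244.152/   # IP reported by AccessDirect above; assumed check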

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-746536 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "313.567658ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "54.491943ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "310.310847ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.492869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (5.39s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdany-port298087833/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757339779193515532" to /tmp/TestFunctionalparallelMountCmdany-port298087833/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757339779193515532" to /tmp/TestFunctionalparallelMountCmdany-port298087833/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757339779193515532" to /tmp/TestFunctionalparallelMountCmdany-port298087833/001/test-1757339779193515532
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.264324ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:56:19.452095  498696 retry.go:31] will retry after 313.813992ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 13:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 13:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 13:56 test-1757339779193515532
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh cat /mount-9p/test-1757339779193515532
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-746536 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3a704f60-baa2-4573-b940-8ec2165cc137] Pending
helpers_test.go:352: "busybox-mount" [3a704f60-baa2-4573-b940-8ec2165cc137] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3a704f60-baa2-4573-b940-8ec2165cc137] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3a704f60-baa2-4573-b940-8ec2165cc137] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003448814s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-746536 logs busybox-mount
I0908 13:56:23.668215  498696 detect.go:223] nested VM detected
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdany-port298087833/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.39s)
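Note: the 9p mount verification above reduces to three commands. The host directory below is a placeholder for the per-test temp dir recorded in the log:

    # sketch: mount a host dir into the guest over 9p and verify it
    out/minikube-linux-amd64 mount -p functional-746536 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &   # keep running
    out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-746536 ssh -- ls -la /mount-9p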

TestFunctional/parallel/MountCmd/specific-port (1.47s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdspecific-port327567790/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.735756ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:56:24.839780  498696 retry.go:31] will retry after 266.295932ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdspecific-port327567790/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh "sudo umount -f /mount-9p": exit status 1 (247.738902ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-746536 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdspecific-port327567790/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.47s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T" /mount1: exit status 1 (302.864695ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:56:26.355168  498696 retry.go:31] will retry after 519.220803ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-746536 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-746536 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3736777664/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
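Note: the cleanup step at functional_test_mount_test.go:370 above tears down every outstanding mount helper for the profile with a single flag:

    # sketch: kill all mount processes for the profile in one call
    out/minikube-linux-amd64 mount -p functional-746536 --kill=true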

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 service list: (1.686760583s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-746536 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-746536 service list -o json: (1.681970448s)
functional_test.go:1504: Took "1.682092762s" to run "out/minikube-linux-amd64 -p functional-746536 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-746536
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-746536
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-746536
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (197.19s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 14:09:37.167142  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m16.491028117s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (197.19s)

TestMultiControlPlane/serial/DeployApp (4.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 kubectl -- rollout status deployment/busybox: (2.721708069s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-w86z6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-zhs5x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-w86z6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-zhs5x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-w86z6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-zhs5x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.90s)
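Note: the DNS checks above run the same lookup at three scopes against every busybox replica; pod names are specific to this run. Sketch of one replica's checks, repeated for each pod above:

    out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- nslookup kubernetes.io
    out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- nslookup kubernetes.default
    out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- nslookup kubernetes.default.svc.cluster.local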

TestMultiControlPlane/serial/PingHostFromPods (1.12s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-db2dp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-w86z6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-w86z6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-zhs5x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 kubectl -- exec busybox-7b57f96db7-zhs5x -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)

TestMultiControlPlane/serial/AddWorkerNode (24.8s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 node add --alsologtostderr -v 5: (23.924481144s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.80s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-210079 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (16.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp testdata/cp-test.txt ha-210079:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3265442367/001/cp-test_ha-210079.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079:/home/docker/cp-test.txt ha-210079-m02:/home/docker/cp-test_ha-210079_ha-210079-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test_ha-210079_ha-210079-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079:/home/docker/cp-test.txt ha-210079-m03:/home/docker/cp-test_ha-210079_ha-210079-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test_ha-210079_ha-210079-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079:/home/docker/cp-test.txt ha-210079-m04:/home/docker/cp-test_ha-210079_ha-210079-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test_ha-210079_ha-210079-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp testdata/cp-test.txt ha-210079-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3265442367/001/cp-test_ha-210079-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m02:/home/docker/cp-test.txt ha-210079:/home/docker/cp-test_ha-210079-m02_ha-210079.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test_ha-210079-m02_ha-210079.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m02:/home/docker/cp-test.txt ha-210079-m03:/home/docker/cp-test_ha-210079-m02_ha-210079-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test_ha-210079-m02_ha-210079-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m02:/home/docker/cp-test.txt ha-210079-m04:/home/docker/cp-test_ha-210079-m02_ha-210079-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test_ha-210079-m02_ha-210079-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp testdata/cp-test.txt ha-210079-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3265442367/001/cp-test_ha-210079-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m03:/home/docker/cp-test.txt ha-210079:/home/docker/cp-test_ha-210079-m03_ha-210079.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test_ha-210079-m03_ha-210079.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m03:/home/docker/cp-test.txt ha-210079-m02:/home/docker/cp-test_ha-210079-m03_ha-210079-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test_ha-210079-m03_ha-210079-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m03:/home/docker/cp-test.txt ha-210079-m04:/home/docker/cp-test_ha-210079-m03_ha-210079-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test_ha-210079-m03_ha-210079-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp testdata/cp-test.txt ha-210079-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3265442367/001/cp-test_ha-210079-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m04:/home/docker/cp-test.txt ha-210079:/home/docker/cp-test_ha-210079-m04_ha-210079.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079 "sudo cat /home/docker/cp-test_ha-210079-m04_ha-210079.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m04:/home/docker/cp-test.txt ha-210079-m02:/home/docker/cp-test_ha-210079-m04_ha-210079-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test_ha-210079-m04_ha-210079-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 cp ha-210079-m04:/home/docker/cp-test.txt ha-210079-m03:/home/docker/cp-test_ha-210079-m04_ha-210079-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m03 "sudo cat /home/docker/cp-test_ha-210079-m04_ha-210079-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.31s)
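Note: the copy matrix above exercises every (source, destination) pair across the four nodes, but each step is the same two primitives:

    # sketch: copy a file into a node, then read it back over ssh to verify
    out/minikube-linux-amd64 -p ha-210079 cp testdata/cp-test.txt ha-210079-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-210079 ssh -n ha-210079-m02 "sudo cat /home/docker/cp-test.txt"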

TestMultiControlPlane/serial/StopSecondaryNode (12.54s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 node stop m02 --alsologtostderr -v 5: (11.863460113s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5: exit status 7 (671.509513ms)

-- stdout --
	ha-210079
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-210079-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-210079-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-210079-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0908 14:10:45.289045  568025 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:10:45.289363  568025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:10:45.289375  568025 out.go:374] Setting ErrFile to fd 2...
	I0908 14:10:45.289382  568025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:10:45.289601  568025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 14:10:45.289809  568025 out.go:368] Setting JSON to false
	I0908 14:10:45.289850  568025 mustload.go:65] Loading cluster: ha-210079
	I0908 14:10:45.289966  568025 notify.go:220] Checking for updates...
	I0908 14:10:45.290297  568025 config.go:182] Loaded profile config "ha-210079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:10:45.290325  568025 status.go:174] checking status of ha-210079 ...
	I0908 14:10:45.290842  568025 cli_runner.go:164] Run: docker container inspect ha-210079 --format={{.State.Status}}
	I0908 14:10:45.311677  568025 status.go:371] ha-210079 host status = "Running" (err=<nil>)
	I0908 14:10:45.311708  568025 host.go:66] Checking if "ha-210079" exists ...
	I0908 14:10:45.312010  568025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-210079
	I0908 14:10:45.331267  568025 host.go:66] Checking if "ha-210079" exists ...
	I0908 14:10:45.331621  568025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:10:45.331675  568025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-210079
	I0908 14:10:45.351167  568025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/ha-210079/id_rsa Username:docker}
	I0908 14:10:45.438526  568025 ssh_runner.go:195] Run: systemctl --version
	I0908 14:10:45.442888  568025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:10:45.454408  568025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:10:45.504555  568025 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 14:10:45.494857362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 14:10:45.505188  568025 kubeconfig.go:125] found "ha-210079" server: "https://192.168.49.254:8443"
	I0908 14:10:45.505229  568025 api_server.go:166] Checking apiserver status ...
	I0908 14:10:45.505283  568025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:10:45.517237  568025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1516/cgroup
	I0908 14:10:45.526863  568025 api_server.go:182] apiserver freezer: "5:freezer:/docker/ebed692e7ac8b68a02e0c5bdf7be858803c3e298745a56251aa9e722616fd620/crio/crio-4396960b16f2e2eadc7b5dc9429b0e6a35e6af239451d07cdf15b8aada9e18b5"
	I0908 14:10:45.526922  568025 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ebed692e7ac8b68a02e0c5bdf7be858803c3e298745a56251aa9e722616fd620/crio/crio-4396960b16f2e2eadc7b5dc9429b0e6a35e6af239451d07cdf15b8aada9e18b5/freezer.state
	I0908 14:10:45.536241  568025 api_server.go:204] freezer state: "THAWED"
	I0908 14:10:45.536277  568025 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 14:10:45.542052  568025 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 14:10:45.542082  568025 status.go:463] ha-210079 apiserver status = Running (err=<nil>)
	I0908 14:10:45.542093  568025 status.go:176] ha-210079 status: &{Name:ha-210079 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:10:45.542110  568025 status.go:174] checking status of ha-210079-m02 ...
	I0908 14:10:45.542395  568025 cli_runner.go:164] Run: docker container inspect ha-210079-m02 --format={{.State.Status}}
	I0908 14:10:45.560396  568025 status.go:371] ha-210079-m02 host status = "Stopped" (err=<nil>)
	I0908 14:10:45.560421  568025 status.go:384] host is not running, skipping remaining checks
	I0908 14:10:45.560429  568025 status.go:176] ha-210079-m02 status: &{Name:ha-210079-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:10:45.560533  568025 status.go:174] checking status of ha-210079-m03 ...
	I0908 14:10:45.560808  568025 cli_runner.go:164] Run: docker container inspect ha-210079-m03 --format={{.State.Status}}
	I0908 14:10:45.579243  568025 status.go:371] ha-210079-m03 host status = "Running" (err=<nil>)
	I0908 14:10:45.579307  568025 host.go:66] Checking if "ha-210079-m03" exists ...
	I0908 14:10:45.579616  568025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-210079-m03
	I0908 14:10:45.597547  568025 host.go:66] Checking if "ha-210079-m03" exists ...
	I0908 14:10:45.597822  568025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:10:45.597859  568025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-210079-m03
	I0908 14:10:45.616747  568025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/ha-210079-m03/id_rsa Username:docker}
	I0908 14:10:45.702016  568025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:10:45.714268  568025 kubeconfig.go:125] found "ha-210079" server: "https://192.168.49.254:8443"
	I0908 14:10:45.714304  568025 api_server.go:166] Checking apiserver status ...
	I0908 14:10:45.714338  568025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:10:45.725880  568025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	I0908 14:10:45.736414  568025 api_server.go:182] apiserver freezer: "5:freezer:/docker/1dad72481efd80297562d83ca8a39613b4a6c1f418788b665b82b0c43482c1b3/crio/crio-c18f2f5cae21979c1076e1dc90c79cc846f73b96937ea0b8f1655028ccdf8403"
	I0908 14:10:45.736536  568025 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1dad72481efd80297562d83ca8a39613b4a6c1f418788b665b82b0c43482c1b3/crio/crio-c18f2f5cae21979c1076e1dc90c79cc846f73b96937ea0b8f1655028ccdf8403/freezer.state
	I0908 14:10:45.745432  568025 api_server.go:204] freezer state: "THAWED"
	I0908 14:10:45.745478  568025 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 14:10:45.749992  568025 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 14:10:45.750020  568025 status.go:463] ha-210079-m03 apiserver status = Running (err=<nil>)
	I0908 14:10:45.750030  568025 status.go:176] ha-210079-m03 status: &{Name:ha-210079-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:10:45.750049  568025 status.go:174] checking status of ha-210079-m04 ...
	I0908 14:10:45.750319  568025 cli_runner.go:164] Run: docker container inspect ha-210079-m04 --format={{.State.Status}}
	I0908 14:10:45.768686  568025 status.go:371] ha-210079-m04 host status = "Running" (err=<nil>)
	I0908 14:10:45.768716  568025 host.go:66] Checking if "ha-210079-m04" exists ...
	I0908 14:10:45.768969  568025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-210079-m04
	I0908 14:10:45.786984  568025 host.go:66] Checking if "ha-210079-m04" exists ...
	I0908 14:10:45.787296  568025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:10:45.787339  568025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-210079-m04
	I0908 14:10:45.807208  568025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/ha-210079-m04/id_rsa Username:docker}
	I0908 14:10:45.897926  568025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:10:45.909315  568025 status.go:176] ha-210079-m04 status: &{Name:ha-210079-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.54s)
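
For reference, the status probe traced above can be replayed by hand. A minimal sketch, assuming the profile name and load-balancer VIP from this run (ha-210079, 192.168.49.254):

	# same check status.go performs: is the node container running?
	docker container inspect ha-210079 --format={{.State.Status}}
	# same endpoint api_server.go polls; /healthz is readable without credentials under
	# default RBAC, and -k skips verification against minikube's self-signed CA
	curl -k https://192.168.49.254:8443/healthz   # prints "ok" when the apiserver is healthy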

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.53s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node start m02 --alsologtostderr -v 5
E0908 14:10:57.123435  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.129845  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.141264  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.163333  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.205263  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.287483  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.449158  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:57.771456  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:58.413714  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:10:59.695724  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:00.238562  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:02.257486  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:11:07.379343  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 node start m02 --alsologtostderr -v 5: (27.599455227s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.6s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 stop --alsologtostderr -v 5
E0908 14:11:17.620775  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 stop --alsologtostderr -v 5: (20.5829951s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 start --wait true --alsologtostderr -v 5
E0908 14:11:38.102189  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:12:19.063999  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 start --wait true --alsologtostderr -v 5: (1m33.893120559s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.60s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 node delete m03 --alsologtostderr -v 5: (10.593328776s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)
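
The go-template passed at ha_test.go:521 flattens each node's Ready condition onto its own line, which makes cluster-wide readiness trivially greppable. A sketch with illustrative output for the three nodes left after the delete:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	#  True
	#  True
	#  True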

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (25.1s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 stop --alsologtostderr -v 5
E0908 14:13:40.988898  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 stop --alsologtostderr -v 5: (24.992470759s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5: exit status 7 (111.225437ms)

-- stdout --
	ha-210079
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-210079-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-210079-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 14:13:47.715597  584696 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:13:47.715881  584696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:13:47.715891  584696 out.go:374] Setting ErrFile to fd 2...
	I0908 14:13:47.715895  584696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:13:47.716081  584696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 14:13:47.716279  584696 out.go:368] Setting JSON to false
	I0908 14:13:47.716313  584696 mustload.go:65] Loading cluster: ha-210079
	I0908 14:13:47.716447  584696 notify.go:220] Checking for updates...
	I0908 14:13:47.716696  584696 config.go:182] Loaded profile config "ha-210079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:13:47.716720  584696 status.go:174] checking status of ha-210079 ...
	I0908 14:13:47.717183  584696 cli_runner.go:164] Run: docker container inspect ha-210079 --format={{.State.Status}}
	I0908 14:13:47.735813  584696 status.go:371] ha-210079 host status = "Stopped" (err=<nil>)
	I0908 14:13:47.735845  584696 status.go:384] host is not running, skipping remaining checks
	I0908 14:13:47.735854  584696 status.go:176] ha-210079 status: &{Name:ha-210079 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:13:47.735908  584696 status.go:174] checking status of ha-210079-m02 ...
	I0908 14:13:47.736290  584696 cli_runner.go:164] Run: docker container inspect ha-210079-m02 --format={{.State.Status}}
	I0908 14:13:47.756325  584696 status.go:371] ha-210079-m02 host status = "Stopped" (err=<nil>)
	I0908 14:13:47.756354  584696 status.go:384] host is not running, skipping remaining checks
	I0908 14:13:47.756361  584696 status.go:176] ha-210079-m02 status: &{Name:ha-210079-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:13:47.756384  584696 status.go:174] checking status of ha-210079-m04 ...
	I0908 14:13:47.756679  584696 cli_runner.go:164] Run: docker container inspect ha-210079-m04 --format={{.State.Status}}
	I0908 14:13:47.776192  584696 status.go:371] ha-210079-m04 host status = "Stopped" (err=<nil>)
	I0908 14:13:47.776216  584696 status.go:384] host is not running, skipping remaining checks
	I0908 14:13:47.776223  584696 status.go:176] ha-210079-m04 status: &{Name:ha-210079-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.10s)
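
Note that the fully stopped cluster surfaces as exit status 7 from "minikube status" rather than as an error message, so scripts should branch on the exit code, not on stderr. A minimal sketch of that pattern:

	out/minikube-linux-amd64 -p ha-210079 status >/dev/null
	rc=$?
	# rc is 0 only when every node is up; the all-stopped run above returned 7
	[ "$rc" -eq 0 ] || echo "ha-210079 not fully running (exit $rc)"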

TestMultiControlPlane/serial/RestartCluster (58.69s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 14:14:37.167353  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.913056592s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.69s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (67.12s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-210079 node add --control-plane --alsologtostderr -v 5: (1m6.267565173s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-210079 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.12s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (72.67s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-619981 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0908 14:16:24.830364  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-619981 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m12.668594265s)
--- PASS: TestJSONOutput/start/Command (72.67s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-619981 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-619981 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-619981 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-619981 --output=json --user=testUser: (5.776191665s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-169201 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-169201 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.163317ms)

-- stdout --
	{"specversion":"1.0","id":"4381a74b-cd40-48cc-bd7f-34a32da1e430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-169201] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b08766ed-8d73-434f-b3c1-bae02aa1aff9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"e6158ac3-41d7-4c54-8b61-83bec3cc4e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c11b983a-cc17-4171-8db6-0900e6a4053c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig"}}
	{"specversion":"1.0","id":"97688305-33f0-4873-8944-1fc4ff4fff9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube"}}
	{"specversion":"1.0","id":"506b7be8-200f-4a93-918e-4efb90d83579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"93af9997-e608-43c8-bb54-5c61c3e4c11f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0455a95f-0140-4943-a04d-5b7e6ba5b438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-169201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-169201
--- PASS: TestErrorJSONOutput (0.21s)
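
Every line emitted under --output=json is a self-contained CloudEvent, so failures can be extracted from the stream mechanically instead of by scraping text. A sketch using jq (the "demo" profile name is a placeholder):

	out/minikube-linux-amd64 start -p demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
	# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64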

TestKicCustomNetwork/create_custom_network (29.37s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-066143 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-066143 --network=: (27.232750776s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-066143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-066143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-066143: (2.116661393s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.37s)

TestKicCustomNetwork/use_default_bridge_network (25.44s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-522521 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-522521 --network=bridge: (23.480966989s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-522521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-522521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-522521: (1.943358272s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.44s)

TestKicExistingNetwork (26.32s)
=== RUN   TestKicExistingNetwork
I0908 14:18:21.272159  498696 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 14:18:21.288396  498696 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 14:18:21.288496  498696 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 14:18:21.288526  498696 cli_runner.go:164] Run: docker network inspect existing-network
W0908 14:18:21.305689  498696 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 14:18:21.305728  498696 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0908 14:18:21.305747  498696 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0908 14:18:21.305882  498696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 14:18:21.323589  498696 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7028836bb739 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:fc:66:32:b5:6b} reservation:<nil>}
I0908 14:18:21.324006  498696 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000595b00}
I0908 14:18:21.324034  498696 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 14:18:21.324076  498696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 14:18:21.377173  498696 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-202704 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-202704 --network=existing-network: (24.262460438s)
helpers_test.go:175: Cleaning up "existing-network-202704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-202704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-202704: (1.921665095s)
I0908 14:18:47.578798  498696 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.32s)
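
The trace above shows the mechanics: when --network names a Docker network that already exists, minikube inspects and joins it instead of allocating a fresh subnet. Reduced to its essentials (the labels and masquerade options from the log omitted):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o com.docker.network.driver.mtu=1500 existing-network
	out/minikube-linux-amd64 start -p existing-network-202704 --network=existing-network
	docker network ls --format {{.Name}}   # existing-network is reused, not recreated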

TestKicCustomSubnet (25.2s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-372158 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-372158 --subnet=192.168.60.0/24: (23.099894085s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-372158 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-372158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-372158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-372158: (2.075747086s)
--- PASS: TestKicCustomSubnet (25.20s)

TestKicStaticIP (28.79s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-897815 --static-ip=192.168.200.200
E0908 14:19:37.166995  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-897815 --static-ip=192.168.200.200: (26.594460346s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-897815 ip
helpers_test.go:175: Cleaning up "static-ip-897815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-897815
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-897815: (2.058283196s)
--- PASS: TestKicStaticIP (28.79s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (50.86s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-515976 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-515976 --driver=docker  --container-runtime=crio: (22.559871327s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-529789 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-529789 --driver=docker  --container-runtime=crio: (23.086680354s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-515976
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-529789
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-529789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-529789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-529789: (1.838810822s)
helpers_test.go:175: Cleaning up "first-515976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-515976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-515976: (2.201772574s)
--- PASS: TestMinikubeProfile (50.86s)
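
The test drives two profiles and switches the active one with "minikube profile <name>"; the JSON listing then reflects the switch. A condensed sketch (treat the .valid[].Name path as an assumption about the profile-list schema):

	out/minikube-linux-amd64 profile first-515976                          # make first-515976 active
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'   # assumed shape: {"invalid":[...],"valid":[{"Name":...},...]}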

TestMountStart/serial/StartWithMountFirst (5.55s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-898633 --memory=3072 --mount-string /tmp/TestMountStartserial891685595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-898633 --memory=3072 --mount-string /tmp/TestMountStartserial891685595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.552972201s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.55s)

TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-898633 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
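
Beyond listing the directory, a quick way to confirm the 9p mount is live is to write on the host side and read it back through the guest; a sketch, assuming the mount from this run is still up:

	echo hello > /tmp/TestMountStartserial891685595/001/probe.txt                         # host side of --mount-string
	out/minikube-linux-amd64 -p mount-start-1-898633 ssh -- cat /minikube-host/probe.txt  # guest side prints "hello"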

TestMountStart/serial/StartWithMountSecond (5.59s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-916580 --memory=3072 --mount-string /tmp/TestMountStartserial891685595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-916580 --memory=3072 --mount-string /tmp/TestMountStartserial891685595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.593520957s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)

TestMountStart/serial/VerifyMountSecond (0.24s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-916580 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.59s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-898633 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-898633 --alsologtostderr -v=5: (1.593033013s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-916580 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-916580
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-916580: (1.182081449s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.11s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-916580
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-916580: (6.110414161s)
--- PASS: TestMountStart/serial/RestartStopped (7.11s)

TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-916580 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (124.14s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045781 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 14:20:57.123362  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045781 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.675677125s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.14s)

TestMultiNode/serial/DeployApp2Nodes (3.54s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-045781 -- rollout status deployment/busybox: (2.042632809s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-dz7ww -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-hnddd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-dz7ww -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-hnddd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-dz7ww -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-hnddd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.54s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-dz7ww -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-dz7ww -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-hnddd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045781 -- exec busybox-7b57f96db7-hnddd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
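
The shell pipeline above leans on busybox nslookup's fixed output layout: awk 'NR==5' keeps only the fifth line (the answer record) and cut -d' ' -f3 plucks the address field, which the following "ping -c 1" then targets. Illustrative, assuming the classic busybox output shape:

	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	# with an answer line like "Address 1: 192.168.67.1 host.minikube.internal" on line 5,
	# field 3 is 192.168.67.1 -- the host gateway the pods then ping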

TestMultiNode/serial/AddNode (57s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-045781 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-045781 -v=5 --alsologtostderr: (56.397570203s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.00s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-045781 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp testdata/cp-test.txt multinode-045781:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3495421684/001/cp-test_multinode-045781.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781:/home/docker/cp-test.txt multinode-045781-m02:/home/docker/cp-test_multinode-045781_multinode-045781-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test_multinode-045781_multinode-045781-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781:/home/docker/cp-test.txt multinode-045781-m03:/home/docker/cp-test_multinode-045781_multinode-045781-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m03 "sudo cat /home/docker/cp-test_multinode-045781_multinode-045781-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp testdata/cp-test.txt multinode-045781-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3495421684/001/cp-test_multinode-045781-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781-m02:/home/docker/cp-test.txt multinode-045781:/home/docker/cp-test_multinode-045781-m02_multinode-045781.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781 "sudo cat /home/docker/cp-test_multinode-045781-m02_multinode-045781.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781-m02:/home/docker/cp-test.txt multinode-045781-m03:/home/docker/cp-test_multinode-045781-m02_multinode-045781-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m03 "sudo cat /home/docker/cp-test_multinode-045781-m02_multinode-045781-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp testdata/cp-test.txt multinode-045781-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3495421684/001/cp-test_multinode-045781-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781-m03:/home/docker/cp-test.txt multinode-045781:/home/docker/cp-test_multinode-045781-m03_multinode-045781.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781 "sudo cat /home/docker/cp-test_multinode-045781-m03_multinode-045781.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 cp multinode-045781-m03:/home/docker/cp-test.txt multinode-045781-m02:/home/docker/cp-test_multinode-045781-m03_multinode-045781-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test_multinode-045781-m03_multinode-045781-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.14s)
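The block above exercises every direction of the cp subcommand across all three nodes. A condensed sketch (minikube stands in for the out/minikube-linux-amd64 binary under test, and the /tmp destination is illustrative):

    minikube -p multinode-045781 cp testdata/cp-test.txt multinode-045781:/home/docker/cp-test.txt    # host -> node
    minikube -p multinode-045781 cp multinode-045781:/home/docker/cp-test.txt /tmp/cp-test_copy.txt   # node -> host
    minikube -p multinode-045781 cp multinode-045781:/home/docker/cp-test.txt \
      multinode-045781-m02:/home/docker/cp-test.txt                                                   # node -> node
    minikube -p multinode-045781 ssh -n multinode-045781-m02 "sudo cat /home/docker/cp-test.txt"      # verify on the target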

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-045781 node stop m03: (1.186339453s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045781 status: exit status 7 (467.752917ms)
-- stdout --
	multinode-045781
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-045781-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-045781-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr: exit status 7 (470.910921ms)
-- stdout --
	multinode-045781
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-045781-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-045781-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 14:24:13.257318  650087 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:24:13.257570  650087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:24:13.257578  650087 out.go:374] Setting ErrFile to fd 2...
	I0908 14:24:13.257582  650087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:24:13.257775  650087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 14:24:13.257957  650087 out.go:368] Setting JSON to false
	I0908 14:24:13.257987  650087 mustload.go:65] Loading cluster: multinode-045781
	I0908 14:24:13.258125  650087 notify.go:220] Checking for updates...
	I0908 14:24:13.258368  650087 config.go:182] Loaded profile config "multinode-045781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:24:13.258387  650087 status.go:174] checking status of multinode-045781 ...
	I0908 14:24:13.258822  650087 cli_runner.go:164] Run: docker container inspect multinode-045781 --format={{.State.Status}}
	I0908 14:24:13.278424  650087 status.go:371] multinode-045781 host status = "Running" (err=<nil>)
	I0908 14:24:13.278485  650087 host.go:66] Checking if "multinode-045781" exists ...
	I0908 14:24:13.278835  650087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-045781
	I0908 14:24:13.296212  650087 host.go:66] Checking if "multinode-045781" exists ...
	I0908 14:24:13.296570  650087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:24:13.296643  650087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-045781
	I0908 14:24:13.314895  650087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/multinode-045781/id_rsa Username:docker}
	I0908 14:24:13.406049  650087 ssh_runner.go:195] Run: systemctl --version
	I0908 14:24:13.410349  650087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:24:13.421104  650087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:24:13.471365  650087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 14:24:13.462339325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 14:24:13.471932  650087 kubeconfig.go:125] found "multinode-045781" server: "https://192.168.67.2:8443"
	I0908 14:24:13.471967  650087 api_server.go:166] Checking apiserver status ...
	I0908 14:24:13.472003  650087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:24:13.482838  650087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	I0908 14:24:13.492009  650087 api_server.go:182] apiserver freezer: "5:freezer:/docker/0f95481b0ccbac90e89dea8a2a24f6225df245f3fa65cd4ed30a54c2b640fe98/crio/crio-19fcd4b0d171478b00823cb380bd5fa1c26ff2271cded4008f6912f35f63d63a"
	I0908 14:24:13.492088  650087 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0f95481b0ccbac90e89dea8a2a24f6225df245f3fa65cd4ed30a54c2b640fe98/crio/crio-19fcd4b0d171478b00823cb380bd5fa1c26ff2271cded4008f6912f35f63d63a/freezer.state
	I0908 14:24:13.500074  650087 api_server.go:204] freezer state: "THAWED"
	I0908 14:24:13.500113  650087 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 14:24:13.504551  650087 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 14:24:13.504589  650087 status.go:463] multinode-045781 apiserver status = Running (err=<nil>)
	I0908 14:24:13.504604  650087 status.go:176] multinode-045781 status: &{Name:multinode-045781 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:24:13.504623  650087 status.go:174] checking status of multinode-045781-m02 ...
	I0908 14:24:13.504950  650087 cli_runner.go:164] Run: docker container inspect multinode-045781-m02 --format={{.State.Status}}
	I0908 14:24:13.523455  650087 status.go:371] multinode-045781-m02 host status = "Running" (err=<nil>)
	I0908 14:24:13.523486  650087 host.go:66] Checking if "multinode-045781-m02" exists ...
	I0908 14:24:13.523793  650087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-045781-m02
	I0908 14:24:13.541989  650087 host.go:66] Checking if "multinode-045781-m02" exists ...
	I0908 14:24:13.542305  650087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:24:13.542366  650087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-045781-m02
	I0908 14:24:13.560619  650087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21508-494960/.minikube/machines/multinode-045781-m02/id_rsa Username:docker}
	I0908 14:24:13.645697  650087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:24:13.656775  650087 status.go:176] multinode-045781-m02 status: &{Name:multinode-045781-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:24:13.656813  650087 status.go:174] checking status of multinode-045781-m03 ...
	I0908 14:24:13.657074  650087 cli_runner.go:164] Run: docker container inspect multinode-045781-m03 --format={{.State.Status}}
	I0908 14:24:13.674713  650087 status.go:371] multinode-045781-m03 host status = "Stopped" (err=<nil>)
	I0908 14:24:13.674742  650087 status.go:384] host is not running, skipping remaining checks
	I0908 14:24:13.674767  650087 status.go:176] multinode-045781-m03 status: &{Name:multinode-045781-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
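Note the exit-code convention both status calls above rely on: minikube status exits 0 when everything is running and 7 once any node's host is stopped, which is what makes the stopped m03 detectable without parsing output. Sketch:

    minikube -p multinode-045781 node stop m03
    minikube -p multinode-045781 status
    echo $?   # 7: at least one node (m03) is stopped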

TestMultiNode/serial/StartAfterStop (7.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-045781 node start m03 -v=5 --alsologtostderr: (6.644877743s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.31s)

TestMultiNode/serial/RestartKeepsNodes (80.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-045781
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-045781
E0908 14:24:37.167720  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-045781: (24.771905981s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045781 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045781 --wait=true -v=5 --alsologtostderr: (55.15432128s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-045781
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.03s)
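The invariant being checked is that a full stop followed by a --wait=true start brings back the same node set. A sketch of the same sequence:

    minikube node list -p multinode-045781     # record the node set
    minikube stop -p multinode-045781
    minikube start -p multinode-045781 --wait=true
    minikube node list -p multinode-045781     # the same three nodes should come back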

TestMultiNode/serial/DeleteNode (5.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-045781 node delete m03: (4.652082545s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)
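The go-template in the last command prints one Ready condition status per node, so after deleting m03 it should emit exactly two True lines. Sketch:

    minikube -p multinode-045781 node delete m03
    kubectl get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'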

TestMultiNode/serial/StopMultiNode (23.72s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 stop
E0908 14:25:57.131997  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-045781 stop: (23.541210163s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045781 status: exit status 7 (88.702942ms)
-- stdout --
	multinode-045781
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-045781-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr: exit status 7 (88.971147ms)
-- stdout --
	multinode-045781
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-045781-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 14:26:09.910267  659755 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:26:09.910397  659755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:26:09.910404  659755 out.go:374] Setting ErrFile to fd 2...
	I0908 14:26:09.910410  659755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:26:09.910660  659755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 14:26:09.910863  659755 out.go:368] Setting JSON to false
	I0908 14:26:09.910904  659755 mustload.go:65] Loading cluster: multinode-045781
	I0908 14:26:09.911012  659755 notify.go:220] Checking for updates...
	I0908 14:26:09.911353  659755 config.go:182] Loaded profile config "multinode-045781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:26:09.911380  659755 status.go:174] checking status of multinode-045781 ...
	I0908 14:26:09.911837  659755 cli_runner.go:164] Run: docker container inspect multinode-045781 --format={{.State.Status}}
	I0908 14:26:09.930139  659755 status.go:371] multinode-045781 host status = "Stopped" (err=<nil>)
	I0908 14:26:09.930184  659755 status.go:384] host is not running, skipping remaining checks
	I0908 14:26:09.930194  659755 status.go:176] multinode-045781 status: &{Name:multinode-045781 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:26:09.930240  659755 status.go:174] checking status of multinode-045781-m02 ...
	I0908 14:26:09.930546  659755 cli_runner.go:164] Run: docker container inspect multinode-045781-m02 --format={{.State.Status}}
	I0908 14:26:09.947470  659755 status.go:371] multinode-045781-m02 host status = "Stopped" (err=<nil>)
	I0908 14:26:09.947505  659755 status.go:384] host is not running, skipping remaining checks
	I0908 14:26:09.947513  659755 status.go:176] multinode-045781-m02 status: &{Name:multinode-045781-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.72s)

TestMultiNode/serial/RestartMultiNode (55.45s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045781 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045781 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (54.884808833s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045781 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.45s)

TestMultiNode/serial/ValidateNameConflict (25.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-045781
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045781-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-045781-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.638472ms)
-- stdout --
	* [multinode-045781-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-045781-m02' is duplicated with machine name 'multinode-045781-m02' in profile 'multinode-045781'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045781-m03 --driver=docker  --container-runtime=crio
E0908 14:27:20.192670  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045781-m03 --driver=docker  --container-runtime=crio: (23.141670901s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-045781
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-045781: exit status 80 (273.702034ms)
-- stdout --
	* Adding node m03 to cluster multinode-045781 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-045781-m03 already exists in multinode-045781-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-045781-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-045781-m03: (1.831215292s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.37s)
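Two name-collision cases are asserted above: starting a new profile whose name matches an existing machine name is refused with exit 14 (MK_USAGE), and node add refuses a node name already owned by another profile with exit 80 (GUEST_NODE_ADD). Sketch of the first collision:

    minikube node list -p multinode-045781      # machines: multinode-045781, multinode-045781-m02, ...
    minikube start -p multinode-045781-m02 --driver=docker --container-runtime=crio
    echo $?   # 14: profile name duplicates an existing machine name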

TestPreload (108.48s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-769183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0908 14:27:40.240032  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-769183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (49.8764097s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769183 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-769183 image pull gcr.io/k8s-minikube/busybox: (1.273767459s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-769183
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-769183: (5.733596681s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-769183 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-769183 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.103553748s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-769183 image list
helpers_test.go:175: Cleaning up "test-preload-769183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-769183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-769183: (2.276253859s)
--- PASS: TestPreload (108.48s)
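The preload check boils down to: create a cluster with preloaded tarballs disabled, side-load an image, stop, restart with preloads enabled (the default), and confirm the side-loaded image survived the restart. Sketch of the same flow:

    minikube start -p test-preload-769183 --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p test-preload-769183 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-769183
    minikube start -p test-preload-769183 --memory=3072 --driver=docker --container-runtime=crio
    minikube -p test-preload-769183 image list   # busybox should still be listed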

TestScheduledStopUnix (98.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-644904 --memory=3072 --driver=docker  --container-runtime=crio
E0908 14:29:37.167450  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-644904 --memory=3072 --driver=docker  --container-runtime=crio: (22.233679439s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644904 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-644904 -n scheduled-stop-644904
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644904 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 14:29:45.778751  498696 retry.go:31] will retry after 51.425µs: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.779915  498696 retry.go:31] will retry after 155.258µs: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.781063  498696 retry.go:31] will retry after 257.179µs: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.782219  498696 retry.go:31] will retry after 427.089µs: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.783405  498696 retry.go:31] will retry after 443.738µs: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.784572  498696 retry.go:31] will retry after 809.934µs: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.785709  498696 retry.go:31] will retry after 1.684289ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.787949  498696 retry.go:31] will retry after 1.759327ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.790150  498696 retry.go:31] will retry after 3.055045ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.793324  498696 retry.go:31] will retry after 3.032164ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.796482  498696 retry.go:31] will retry after 8.429004ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.805762  498696 retry.go:31] will retry after 5.868271ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.812036  498696 retry.go:31] will retry after 7.594211ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.820309  498696 retry.go:31] will retry after 13.069997ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.833585  498696 retry.go:31] will retry after 24.593162ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
I0908 14:29:45.858862  498696 retry.go:31] will retry after 29.928969ms: open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/scheduled-stop-644904/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644904 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-644904 -n scheduled-stop-644904
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-644904
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-644904 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-644904
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-644904: exit status 7 (70.388444ms)
-- stdout --
	scheduled-stop-644904
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-644904 -n scheduled-stop-644904
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-644904 -n scheduled-stop-644904: exit status 7 (69.691524ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-644904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-644904
E0908 14:30:57.123667  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-644904: (5.183369594s)
--- PASS: TestScheduledStopUnix (98.80s)
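The scheduled-stop flow above: arm a 5m stop, re-arm at 15s (replacing the pending timer, hence the "process already finished" notes), cancel, then arm again and let it fire. Sketch:

    minikube stop -p scheduled-stop-644904 --schedule 5m        # arm a delayed stop
    minikube stop -p scheduled-stop-644904 --schedule 15s       # re-arm, replacing the pending timer
    minikube stop -p scheduled-stop-644904 --cancel-scheduled   # disarm
    minikube stop -p scheduled-stop-644904 --schedule 15s       # arm again and wait for it to fire
    minikube status -p scheduled-stop-644904                    # exit 7 once the host has stopped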

TestInsufficientStorage (9.88s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-804548 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-804548 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.563370092s)
-- stdout --
	{"specversion":"1.0","id":"aa1e423a-f74e-4c3c-a9f4-104434eeb80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-804548] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a3e63a6-90d6-42d7-9027-7f674ce69933","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"bab06959-7ba2-4858-853e-e6a048c168aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e3f5be0c-32d5-4bbf-8245-f87066bafb12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig"}}
	{"specversion":"1.0","id":"7aa56eb5-1d5d-42e0-8ad7-c4eae22a06aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube"}}
	{"specversion":"1.0","id":"2aff2ea9-f200-4bc5-aa52-407b0736d6ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"83b400c3-a49e-48b5-b26b-86d101f0e771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"34cb534c-7b2e-46e9-ba93-5b696b08cb82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"945aa3fe-ef2b-4c60-8554-d4a1db8f2dfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"eadb9def-21d7-4755-902a-8faad69ad940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"15950f5f-114e-443f-bcab-1d0e82b430f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d6098236-8aef-4b4e-9f5d-6fe5b3f84a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-804548\" primary control-plane node in \"insufficient-storage-804548\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"04d431f7-148d-48a7-8b17-b017ec292d24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f87aebb-2098-4d4b-8516-710a1885f6e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f56df50a-96ca-463f-bbfc-3d7238668c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-804548 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-804548 --output=json --layout=cluster: exit status 7 (257.067192ms)
-- stdout --
	{"Name":"insufficient-storage-804548","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-804548","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 14:31:09.734876  681659 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-804548" does not appear in /home/jenkins/minikube-integration/21508-494960/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-804548 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-804548 --output=json --layout=cluster: exit status 7 (259.113536ms)
-- stdout --
	{"Name":"insufficient-storage-804548","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-804548","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 14:31:09.994588  681757 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-804548" does not appear in /home/jenkins/minikube-integration/21508-494960/kubeconfig
	E0908 14:31:10.004906  681757 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/insufficient-storage-804548/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-804548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-804548
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-804548: (1.800756834s)
--- PASS: TestInsufficientStorage (9.88s)
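Two mechanisms drive this test: the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the output appear to let the harness simulate a full /var, and --output=json turns every progress step into a CloudEvent line, so the failure is machine-readable (exit 26, RSRC_DOCKER_STORAGE). A sketch of inspecting the resulting state, assuming jq is available:

    minikube status -p insufficient-storage-804548 --output=json --layout=cluster | jq -r .StatusName
    # -> InsufficientStorage (StatusCode 507), while the status command itself exits 7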

TestRunningBinaryUpgrade (48.47s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.148090938 start -p running-upgrade-792561 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.148090938 start -p running-upgrade-792561 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.695088045s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-792561 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-792561 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.295182075s)
helpers_test.go:175: Cleaning up "running-upgrade-792561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-792561
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-792561: (1.920570916s)
--- PASS: TestRunningBinaryUpgrade (48.47s)

TestKubernetesUpgrade (172.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.789146107s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-436040
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-436040: (1.209462085s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-436040 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-436040 status --format={{.Host}}: exit status 7 (102.191267ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m38.311972298s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-436040 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (81.138435ms)
-- stdout --
	* [kubernetes-upgrade-436040] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-436040
	    minikube start -p kubernetes-upgrade-436040 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4360402 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-436040 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.111838146s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-436040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-436040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-436040: (4.286415813s)
--- PASS: TestKubernetesUpgrade (172.96s)
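The upgrade path asserted above: start at v1.28.0, stop, restart the same profile at v1.34.0, then confirm that asking for v1.28.0 again is rejected (exit 106, K8S_DOWNGRADE_UNSUPPORTED) while a same-version restart still succeeds. Sketch:

    minikube start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-436040
    minikube start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio
    # downgrades are refused (exit 106); minikube suggests delete/recreate or a second profile instead
    minikube start -p kubernetes-upgrade-436040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio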

TestMissingContainerUpgrade (67.3s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3473365335 start -p missing-upgrade-328410 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3473365335 start -p missing-upgrade-328410 --memory=3072 --driver=docker  --container-runtime=crio: (22.470109825s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-328410
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-328410
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-328410 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-328410 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.393897587s)
helpers_test.go:175: Cleaning up "missing-upgrade-328410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-328410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-328410: (2.103923465s)
--- PASS: TestMissingContainerUpgrade (67.30s)
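Here the machine container is deleted out from under a profile created by an old release, and the new binary is expected to recreate it on start. Sketch (the /tmp path is the versioned binary staged by this run):

    /tmp/minikube-v1.32.0.3473365335 start -p missing-upgrade-328410 --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-328410 && docker rm missing-upgrade-328410   # make the machine container go missing
    out/minikube-linux-amd64 start -p missing-upgrade-328410 --memory=3072 --driver=docker --container-runtime=crio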

TestStoppedBinaryUpgrade/Setup (0.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

TestStoppedBinaryUpgrade/Upgrade (68.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3420698090 start -p stopped-upgrade-792236 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3420698090 start -p stopped-upgrade-792236 --memory=3072 --vm-driver=docker  --container-runtime=crio: (48.523099913s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3420698090 -p stopped-upgrade-792236 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3420698090 -p stopped-upgrade-792236 stop: (3.341304021s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-792236 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-792236 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.596937454s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.46s)
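Same idea as the running-binary upgrade, but across a clean stop: the old release creates and stops the cluster, and the new binary restarts it in place. Note the old release still used the deprecated --vm-driver flag, while the current binary takes --driver. Sketch:

    /tmp/minikube-v1.32.0.3420698090 start -p stopped-upgrade-792236 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.3420698090 -p stopped-upgrade-792236 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-792236 --memory=3072 --driver=docker --container-runtime=crio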

TestNetworkPlugins/group/false (9.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-588246 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-588246 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (1.183984423s)
-- stdout --
	* [false-588246] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0908 14:31:16.368957  683672 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:31:16.369085  683672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:31:16.369096  683672 out.go:374] Setting ErrFile to fd 2...
	I0908 14:31:16.369103  683672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:31:16.369302  683672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-494960/.minikube/bin
	I0908 14:31:16.370104  683672 out.go:368] Setting JSON to false
	I0908 14:31:16.371478  683672 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15222,"bootTime":1757326654,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 14:31:16.371563  683672 start.go:140] virtualization: kvm guest
	I0908 14:31:16.427853  683672 out.go:179] * [false-588246] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 14:31:16.491637  683672 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 14:31:16.491764  683672 notify.go:220] Checking for updates...
	I0908 14:31:16.595109  683672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:31:16.659025  683672 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	I0908 14:31:16.741910  683672 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	I0908 14:31:16.825600  683672 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 14:31:16.996402  683672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:31:17.080805  683672 config.go:182] Loaded profile config "kubernetes-upgrade-436040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0908 14:31:17.080948  683672 config.go:182] Loaded profile config "offline-crio-385479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:31:17.081075  683672 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:31:17.104222  683672 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:31:17.104374  683672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:31:17.276854  683672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:56 SystemTime:2025-09-08 14:31:17.147836918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 14:31:17.277043  683672 docker.go:318] overlay module found
	I0908 14:31:17.325987  683672 out.go:179] * Using the docker driver based on user configuration
	I0908 14:31:17.327340  683672 start.go:304] selected driver: docker
	I0908 14:31:17.327366  683672 start.go:918] validating driver "docker" against <nil>
	I0908 14:31:17.327384  683672 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:31:17.344609  683672 out.go:203] 
	W0908 14:31:17.408223  683672 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 14:31:17.471528  683672 out.go:203] 

** /stderr **
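This rejection is the behavior under test: CRI-O does not bundle a pod network, so minikube validates the CNI setting up front and exits with MK_USAGE (status 14) when --cni=false is combined with --container-runtime=crio, before any container is created. A variant of the same command that would pass this validation (a sketch; bridge is one of minikube's built-in --cni options) is:

	out/minikube-linux-amd64 start -p false-588246 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio

The test asserts the refusal itself, which is why the run is recorded as PASS despite the non-zero exit.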
net_test.go:88: 
----------------------- debugLogs start: false-588246 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-588246

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-588246

>>> host: /etc/nsswitch.conf:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/hosts:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/resolv.conf:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-588246

>>> host: crictl pods:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: crictl containers:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> k8s: describe netcat deployment:
error: context "false-588246" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-588246" does not exist

>>> k8s: netcat logs:
error: context "false-588246" does not exist

>>> k8s: describe coredns deployment:
error: context "false-588246" does not exist

>>> k8s: describe coredns pods:
error: context "false-588246" does not exist

>>> k8s: coredns logs:
error: context "false-588246" does not exist

>>> k8s: describe api server pod(s):
error: context "false-588246" does not exist

>>> k8s: api server logs:
error: context "false-588246" does not exist

>>> host: /etc/cni:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: ip a s:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: ip r s:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: iptables-save:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: iptables table nat:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> k8s: describe kube-proxy daemon set:
error: context "false-588246" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-588246" does not exist

>>> k8s: kube-proxy logs:
error: context "false-588246" does not exist

>>> host: kubelet daemon status:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: kubelet daemon config:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> k8s: kubelet logs:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-588246

>>> host: docker daemon status:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: docker daemon config:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/docker/daemon.json:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: docker system info:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: cri-docker daemon status:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: cri-docker daemon config:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: cri-dockerd version:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: containerd daemon status:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: containerd daemon config:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/containerd/config.toml:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: containerd config dump:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: crio daemon status:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: crio daemon config:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: /etc/crio:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

>>> host: crio config:
* Profile "false-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588246"

----------------------- debugLogs end: false-588246 [took: 8.311674921s] --------------------------------
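Every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the false-588246 cluster was never created: minikube exited during flag validation, before provisioning anything. The empty kubeconfig shown under "k8s: kubectl config" is what an unconfigured client reports, reproducible with:

	kubectl config view -o yaml

so the debug output is consistent with a start that failed fast, exactly as this test expects.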
helpers_test.go:175: Cleaning up "false-588246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-588246
--- PASS: TestNetworkPlugins/group/false (9.80s)

TestPause/serial/Start (74.41s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-353085 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-353085 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m14.414279194s)
--- PASS: TestPause/serial/Start (74.41s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-792236
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (87.306465ms)

-- stdout --
	* [NoKubernetes-030356] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-494960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-494960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
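The usage error is the point of this sub-test: --no-kubernetes and --kubernetes-version contradict each other, so minikube refuses the combination with exit code 14 instead of guessing. Either flag is valid on its own (a sketch of the two alternatives):

	out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p NoKubernetes-030356 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio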
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (24.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030356 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030356 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.114807151s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-030356 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.45s)

TestNoKubernetes/serial/StartWithStopK8s (8.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.499422387s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-030356 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-030356 status -o json: exit status 2 (287.363363ms)

-- stdout --
	{"Name":"NoKubernetes-030356","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
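The exit status 2 is deliberate: minikube status exits non-zero when a tracked component is not running, so the JSON (Host Running, Kubelet and APIServer Stopped) and the exit code together confirm that --no-kubernetes kept the container but removed Kubernetes from it. A script can branch on that directly (a sketch):

	out/minikube-linux-amd64 -p NoKubernetes-030356 status -o json || echo "cluster is not fully running (exit $?)"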
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-030356
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-030356: (1.89582313s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.68s)

TestNoKubernetes/serial/Start (5.03s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030356 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.029282979s)
--- PASS: TestNoKubernetes/serial/Start (5.03s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-030356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-030356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.972157ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
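systemctl is-active exits 0 only when the unit is active; status 3 here means the kubelet unit is inactive, which is the state this assertion wants after a --no-kubernetes start. Because --quiet suppresses the state name, the exit code carries the whole answer; dropping the flag prints it (a sketch):

	out/minikube-linux-amd64 ssh -p NoKubernetes-030356 "sudo systemctl is-active kubelet"
	# expected to print "inactive" and exit 3 while Kubernetes is not running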
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (13.52s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (12.796403614s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (13.52s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-030356
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-030356: (1.229435504s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (6.6s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030356 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030356 --driver=docker  --container-runtime=crio: (6.595757056s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.60s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-030356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-030356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.877164ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/SecondStartNoReconfiguration (27.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-353085 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-353085 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.035992423s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.06s)

TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-353085 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-353085 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-353085 --output=json --layout=cluster: exit status 2 (330.546517ms)

-- stdout --
	{"Name":"pause-353085","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-353085","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
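minikube encodes component state as HTTP-flavored status codes, visible in the JSON above: 200 (OK), 405 (Stopped), and 418 (Paused). The exit status 2 mirrors that: status reports non-zero when the cluster is not fully running, which is exactly what a paused cluster is. To pull just the top-level state (a sketch, assuming jq is available):

	out/minikube-linux-amd64 status -p pause-353085 --output=json --layout=cluster | jq -r '.StatusName'
	# prints "Paused" for the state captured above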
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (1.03s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-353085 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-353085 --alsologtostderr -v=5: (1.026263547s)
--- PASS: TestPause/serial/Unpause (1.03s)

TestPause/serial/PauseAgain (1.05s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-353085 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-353085 --alsologtostderr -v=5: (1.049060222s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

TestPause/serial/DeletePaused (4.26s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-353085 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-353085 --alsologtostderr -v=5: (4.263386295s)
--- PASS: TestPause/serial/DeletePaused (4.26s)

TestPause/serial/VerifyDeletedResources (0.62s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-353085
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-353085: exit status 1 (20.85362ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-353085: no such volume

** /stderr **
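The failed docker volume inspect is the positive signal here: after delete -p, the profile's volume must be gone, and Docker's "no such volume" error proves it. Together with the docker ps -a and docker network ls checks around it, this verifies that no container, volume, or network survived the delete. The same check can be scripted (a sketch):

	docker volume inspect pause-353085 >/dev/null 2>&1 || echo "volume removed, as expected"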
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)

TestNetworkPlugins/group/auto/Start (73.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m13.864299539s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.86s)

TestNetworkPlugins/group/kindnet/Start (47.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.80325486s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.80s)

TestNetworkPlugins/group/calico/Start (55.25s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0908 14:34:37.166939  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.253587555s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jv579" [dd796f37-87f1-47ec-8dcc-5e4ee098d503] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003902302s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-588246 "pgrep -a kubelet"
I0908 14:35:18.112472  498696 config.go:182] Loaded profile config "kindnet-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bc8vr" [c3900c28-f0a1-4eb6-8cff-ede64c40a6ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bc8vr" [c3900c28-f0a1-4eb6-8cff-ede64c40a6ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004550548s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.20s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-588246 "pgrep -a kubelet"
I0908 14:35:23.200200  498696 config.go:182] Loaded profile config "auto-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lf78z" [5c038d06-a3aa-45df-947b-1cfcdbc8b4d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lf78z" [5c038d06-a3aa-45df-947b-1cfcdbc8b4d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003548074s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vmqk7" [5a6e23a8-a256-4ecc-9278-21c11dcac6fe] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-vmqk7" [5a6e23a8-a256-4ecc-9278-21c11dcac6fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003837955s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-588246 "pgrep -a kubelet"
I0908 14:35:38.123993  498696 config.go:182] Loaded profile config "calico-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zcv9z" [c6695078-301b-45c3-a2c2-912966874141] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zcv9z" [c6695078-301b-45c3-a2c2-912966874141] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004636001s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.19s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (60.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.557903576s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.56s)

TestNetworkPlugins/group/enable-default-cni/Start (70.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0908 14:35:57.124157  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/functional-746536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.016749347s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.02s)

TestNetworkPlugins/group/flannel/Start (62.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.985406514s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.99s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-588246 "pgrep -a kubelet"
I0908 14:36:51.169517  498696 config.go:182] Loaded profile config "custom-flannel-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjrrs" [ff040f80-3496-4cf2-880c-57b5d3a1c54e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjrrs" [ff040f80-3496-4cf2-880c-57b5d3a1c54e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004250333s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)
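
The NetCatPod steps above deploy testdata/netcat-deployment.yaml and poll until a pod labelled app=netcat reports Ready. Outside the harness the same wait can be expressed directly with kubectl; a sketch, assuming the same context and manifest:

    kubectl --context custom-flannel-588246 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context custom-flannel-588246 wait pod -l app=netcat \
      --for=condition=ready --timeout=15m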

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
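
The DNS step only asserts that the short name kubernetes.default resolves from inside the pod, i.e. that cluster DNS and the pod's search domains are wired up. When the search path itself is in doubt, the fully qualified name isolates the resolver (illustrative, same netcat pod assumed):

    kubectl --context custom-flannel-588246 exec deployment/netcat -- \
      nslookup kubernetes.default.svc.cluster.local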

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-588246 "pgrep -a kubelet"
I0908 14:37:05.833956  498696 config.go:182] Loaded profile config "enable-default-cni-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p6b6c" [dcfec18f-b686-4982-9882-5b5bf87cc1f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p6b6c" [dcfec18f-b686-4982-9882-5b5bf87cc1f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004143082s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6qlc5" [df84dd93-6c6e-4a01-9a6e-d39c2651c504] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003614455s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
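
ControllerPod waits for the kube-flannel DaemonSet pod (label app=flannel) to become healthy before the connectivity subtests run. The same state can be inspected by hand; a sketch against this profile:

    kubectl --context flannel-588246 -n kube-flannel get pods -l app=flannel -o wide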

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-588246 "pgrep -a kubelet"
I0908 14:37:19.654414  498696 config.go:182] Loaded profile config "flannel-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5x257" [540d8b84-5d6a-40c4-aac5-1d8607ebb751] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5x257" [540d8b84-5d6a-40c4-aac5-1d8607ebb751] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003522144s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.19s)

TestNetworkPlugins/group/bridge/Start (64.58s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-588246 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m4.577424161s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.58s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (57.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-946137 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-946137 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (57.726599111s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (57.73s)

TestStartStop/group/no-preload/serial/FirstStart (61.8s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-527846 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-527846 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m1.799322657s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.80s)

TestStartStop/group/embed-certs/serial/FirstStart (45.17s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-037138 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-037138 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (45.173816847s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-588246 "pgrep -a kubelet"
I0908 14:38:25.106786  498696 config.go:182] Loaded profile config "bridge-588246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-588246 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m6zkc" [9bbe5e87-aac4-4401-b1d0-9f4dcf803e9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m6zkc" [9bbe5e87-aac4-4401-b1d0-9f4dcf803e9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003570222s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.20s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-946137 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a97a83fe-509e-45d1-bb7e-90fe7ce66472] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a97a83fe-509e-45d1-bb7e-90fe7ce66472] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004132983s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-946137 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-588246 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-588246 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-037138 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5b82ca57-5a12-42d2-85dc-fc393b0b9c8e] Pending
helpers_test.go:352: "busybox" [5b82ca57-5a12-42d2-85dc-fc393b0b9c8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5b82ca57-5a12-42d2-85dc-fc393b0b9c8e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0044186s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-037138 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-946137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-946137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106348692s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-946137 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)
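
The --images/--registries flags point the metrics-server addon at a deliberately unreachable registry (fake.domain), so the test exercises addon wiring without pulling a real image. One way to confirm the override landed, assuming the metrics-server Deployment the addon creates in kube-system:

    kubectl --context old-k8s-version-946137 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'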

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-946137 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-946137 --alsologtostderr -v=3: (12.090377017s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-527846 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f58f9de8-3317-47c6-af71-fdeb5661607b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f58f9de8-3317-47c6-af71-fdeb5661607b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004346649s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-527846 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-037138 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-037138 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (13.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-037138 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-037138 --alsologtostderr -v=3: (13.242668538s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.24s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-527846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-527846 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-527846 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-527846 --alsologtostderr -v=3: (11.971848737s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-946137 -n old-k8s-version-946137
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-946137 -n old-k8s-version-946137: exit status 7 (76.38068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-946137 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
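
Note that minikube status intentionally exits non-zero for a stopped host (exit status 7 here) while still printing the state, so scripts must capture the code rather than fail on it. A sketch of the pattern, with hypothetical variable names:

    # prints host=Stopped exit=7 for a stopped profile
    host=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-946137) || rc=$?
    echo "host=${host} exit=${rc:-0}"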

TestStartStop/group/old-k8s-version/serial/SecondStart (51.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-946137 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-946137 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.763395699s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-946137 -n old-k8s-version-946137
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.09s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-410183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-410183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (46.860081357s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-037138 -n embed-certs-037138
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-037138 -n embed-certs-037138: exit status 7 (79.532988ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-037138 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (49.35s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-037138 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-037138 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (49.000636523s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-037138 -n embed-certs-037138
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-527846 -n no-preload-527846
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-527846 -n no-preload-527846: exit status 7 (101.037743ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-527846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (51.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-527846 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 14:39:37.167473  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/addons-329194/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-527846 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.346125303s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-527846 -n no-preload-527846
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.68s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-410183 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c4c7167d-81e9-47d7-84f7-5462a8d9b391] Pending
helpers_test.go:352: "busybox" [c4c7167d-81e9-47d7-84f7-5462a8d9b391] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c4c7167d-81e9-47d7-84f7-5462a8d9b391] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003903279s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-410183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2tcjs" [fd21b6aa-f837-46e9-a5fc-6c63fd7cc8ca] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003755441s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-410183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-410183 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r46hw" [56beaba0-65a4-43d1-b1b8-1d7fe86e63fd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003783928s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-410183 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-410183 --alsologtostderr -v=3: (12.018543494s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2tcjs" [fd21b6aa-f837-46e9-a5fc-6c63fd7cc8ca] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003938635s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-946137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r46hw" [56beaba0-65a4-43d1-b1b8-1d7fe86e63fd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003671175s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-037138 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-946137 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-946137 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-946137 -n old-k8s-version-946137
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-946137 -n old-k8s-version-946137: exit status 2 (308.668517ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-946137 -n old-k8s-version-946137
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-946137 -n old-k8s-version-946137: exit status 2 (294.715943ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-946137 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-946137 -n old-k8s-version-946137
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-946137 -n old-k8s-version-946137
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.82s)
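
While paused, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2; both recover after unpause. Condensed, the round-trip looks like this (illustrative; || true absorbs the expected non-zero status while paused):

    out/minikube-linux-amd64 pause -p old-k8s-version-946137
    out/minikube-linux-amd64 status -p old-k8s-version-946137 --format '{{.APIServer}}/{{.Kubelet}}' || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-946137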

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2dslw" [427f8c16-a490-4d2d-8166-220cd5ac3a39] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00342193s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-037138 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-037138 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-037138 -n embed-certs-037138
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-037138 -n embed-certs-037138: exit status 2 (369.139555ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-037138 -n embed-certs-037138
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-037138 -n embed-certs-037138: exit status 2 (356.97483ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-037138 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-037138 -n embed-certs-037138
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-037138 -n embed-certs-037138
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183: exit status 7 (82.288494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-410183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-410183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-410183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.249413379s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.57s)

TestStartStop/group/newest-cni/serial/FirstStart (29.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-887045 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-887045 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (29.033550478s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2dslw" [427f8c16-a490-4d2d-8166-220cd5ac3a39] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00435282s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-527846 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-527846 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-527846 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-527846 -n no-preload-527846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-527846 -n no-preload-527846: exit status 2 (326.704653ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-527846 -n no-preload-527846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-527846 -n no-preload-527846: exit status 2 (428.761486ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-527846 --alsologtostderr -v=1
E0908 14:40:11.812149  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:11.819167  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:11.830451  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:11.851972  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:11.893344  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:11.974707  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-527846 -n no-preload-527846
E0908 14:40:12.136592  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-527846 -n no-preload-527846
E0908 14:40:12.460633  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-887045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0908 14:40:32.311265  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:32.483686  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/calico-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (2.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-887045 --alsologtostderr -v=3
E0908 14:40:33.125330  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/calico-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:33.626805  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/auto-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:34.406872  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/calico-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-887045 --alsologtostderr -v=3: (2.344658711s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-887045 -n newest-cni-887045
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-887045 -n newest-cni-887045: exit status 7 (79.74634ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-887045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (15.32s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-887045 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 14:40:36.968942  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/calico-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:42.091186  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/calico-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:43.868680  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/auto-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-887045 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (14.984536686s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-887045 -n newest-cni-887045
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-887045 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.7s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-887045 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-887045 -n newest-cni-887045
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-887045 -n newest-cni-887045: exit status 2 (292.398599ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-887045 -n newest-cni-887045
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-887045 -n newest-cni-887045: exit status 2 (287.951395ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-887045 --alsologtostderr -v=1
E0908 14:40:52.333012  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/calico-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:40:52.793508  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/kindnet-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-887045 -n newest-cni-887045
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-887045 -n newest-cni-887045
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n2ms6" [ef010b4c-6565-492b-b41b-32e4c40c1a83] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00426529s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n2ms6" [ef010b4c-6565-492b-b41b-32e4c40c1a83] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003722752s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-410183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-410183 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-410183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183: exit status 2 (286.081556ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183: exit status 2 (286.308854ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-410183 --alsologtostderr -v=1
E0908 14:41:04.350350  498696 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-494960/.minikube/profiles/auto-588246/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-410183 -n default-k8s-diff-port-410183
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-329194 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.5s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-588246 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-588246

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-588246

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/hosts:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/resolv.conf:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-588246

>>> host: crictl pods:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: crictl containers:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> k8s: describe netcat deployment:
error: context "kubenet-588246" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-588246" does not exist

>>> k8s: netcat logs:
error: context "kubenet-588246" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-588246" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-588246" does not exist

>>> k8s: coredns logs:
error: context "kubenet-588246" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-588246" does not exist

>>> k8s: api server logs:
error: context "kubenet-588246" does not exist

>>> host: /etc/cni:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: ip a s:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: ip r s:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: iptables-save:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: iptables table nat:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-588246" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-588246" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-588246" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: kubelet daemon config:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> k8s: kubelet logs:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-588246

>>> host: docker daemon status:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: docker daemon config:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: docker system info:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: cri-docker daemon status:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: cri-docker daemon config:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: cri-dockerd version:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: containerd daemon status:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: containerd daemon config:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: containerd config dump:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: crio daemon status:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: crio daemon config:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: /etc/crio:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

>>> host: crio config:
* Profile "kubenet-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588246"

----------------------- debugLogs end: kubenet-588246 [took: 4.260149191s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-588246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-588246
--- SKIP: TestNetworkPlugins/group/kubenet (4.50s)

TestNetworkPlugins/group/cilium (4.6s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-588246 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-588246" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-588246

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-588246

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-588246" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-588246" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-588246" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-588246" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-588246" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: kubelet daemon config:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> k8s: kubelet logs:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

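The empty kubeconfig above (clusters, contexts, and users all null) is the root cause of every "context was not found" / "does not exist" error in this dump: deleting the cilium-588246 profile removed its kubeconfig entry, so kubectl has no context left to resolve. A minimal sketch of that check in Go using client-go's clientcmd package (illustrative only, not the test harness's own code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default locations (KUBECONFIG, ~/.kube/config).
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            panic(err)
        }
        // A deleted minikube profile leaves no context entry behind.
        if _, ok := cfg.Contexts["cilium-588246"]; !ok {
            fmt.Println(`context "cilium-588246" does not exist`)
        }
    }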
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-588246

>>> host: docker daemon status:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: docker daemon config:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: docker system info:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: cri-docker daemon status:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: cri-docker daemon config:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: cri-dockerd version:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: containerd daemon status:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: containerd daemon config:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: containerd config dump:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: crio daemon status:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: crio daemon config:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: /etc/crio:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

>>> host: crio config:
* Profile "cilium-588246" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588246"

----------------------- debugLogs end: cilium-588246 [took: 4.420340519s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-588246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-588246
--- SKIP: TestNetworkPlugins/group/cilium (4.60s)

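Note that every probe above failed in one of only two ways: commands wrapped in the minikube CLI print the "Profile ... not found" hint, while commands wrapped in kubectl report a missing context. That is consistent with a collector that simply iterates a fixed command list and dumps whatever each command prints, even after the profile is gone. An illustrative sketch of such a loop (hypothetical, not minikube's actual debugLogs implementation):

    package example

    import (
        "fmt"
        "os/exec"
    )

    // dumpProbes runs each diagnostic command and prints its output under a
    // ">>> label:" header, mirroring the format of the dump above. The probe
    // list is abridged and hypothetical.
    func dumpProbes(profile string) {
        probes := []struct {
            label string
            cmd   []string
        }{
            {"host: ip a s", []string{"minikube", "-p", profile, "ssh", "ip a s"}},
            {"k8s: kubectl config", []string{"kubectl", "config", "view"}},
        }
        for _, p := range probes {
            // Failures are captured and printed rather than aborting the dump.
            out, _ := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
            fmt.Printf(">>> %s:\n%s\n", p.label, out)
        }
    }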
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-566161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-566161
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

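For reference, the skip at start_stop_delete_test.go:101 is a plain driver gate: the test bails out before doing any work unless the active driver is virtualbox. A minimal sketch of such a gate using Go's testing package (the env var name is hypothetical, not minikube's actual source):

    package example

    import (
        "os"
        "testing"
    )

    func TestDisableDriverMounts(t *testing.T) {
        // Hypothetical env var standing in for the harness's driver setting.
        if driver := os.Getenv("TEST_DRIVER"); driver != "virtualbox" {
            t.Skipf("only runs on virtualbox, got driver %q", driver)
        }
        // ... driver-mount assertions would follow ...
    }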